Reproducing and Extending Stability Selection¶

Reproducing and Extending ideas from Meinshaushen

This notebook includes: simulation code, Lasso + randomized Lasso, stability selection, reproduction of figure 4 in the paper, and an extension grid showing behavior across correlation $\rho$ and sample size $n$

Introduction¶

Stability selection is a method of improving feature selection by using subsampling with a base selection algorithm (eg. Lasso) to compute selection probabilities for each variables. Variables that are selected frequently across subsamples are considered 'stable'. This notebook reproduces a simple simulation example and extends it by varying correlation and sample size.

It is especially helpful when we are working with data that has more variables than the observations in the sample. The problem the paper is trying to solve is to select the variables that matter. Methods like lasso will select variables but small changes in the data results in a completely different set of variables. Cross validation tends to pick too many varibles, and picking a tuning parameter is tedious with common occurance of false positives.

The idea of stability selection is Instead of trusting the variable selection method once, we subsample the data many times (half at a time), run the selection method (lasso, graphical representation, etc) on each subsample, and the we record how often each variable is showing up. Variables consistently selected across subsamples tend to be stable and probably true. Variables that appear randomly are likely noise. Then we choose a threshold such that the probability of variables above the threshold are considered stable and below are likely noise.

In [ ]:
# Necessary libraries and packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import Lasso
import random

Understanding the paper¶

Problem they are trying to solve¶

When you have lots of variables (like thousands of genes) and want to find which ones are actually important, it's really hard because:

  1. Too many false positives: Traditional methods pick too many irrelevant variables
  2. Unstable results: Running the same method twice gives different answers
  3. Hard to tune: You need to set parameters perfectly or everything breaks

Imagine you measure 10,000 genes in 100 patients to find which genes affect a disease.

  • Maybe only 10 genes actually matter
  • But standard methods might pick 500 genes, with only 5 being correct
  • If you run it again, you might get a completely different set of 500 genes!

Why is this so hard?

  • Not enough data: we only have 100 patients and 10000 genes.
  • There might be correlation. If $X_1$ and $X_2$ are correlated then maybe in one lasso $X_1$ might be selected in the second sample $X_2$ in third sample maybe neither are chosen and in the fourth sample maybe both are chosen. This inconsistency in choosing variables is the lasso instability mentioned in the paper.

Their solution¶

Instead of running Lasso and getting variables that might be false positive, we run our selection method (lasso, grapical modelling, etc) 100 times on random halves of the data. (need to mention where 100 and half is coming from)! And we keep variables that consistently show up (Explain how do we choose what consistency mean? 90%? 60%? 75%? of the times variables show up)

Why this works? Truly important variables will be stable and show will be selected more number of times (maybe show mathematical rigor bhind this) The non important variables won't show up as often as the important ones do. That's how we reduce the chances of selecting false positive variables.

The algorithm (step-by-step): For b = 1 to 100:

  1. Randomly pick half your data
  2. Run LASSO on this half
  3. Record which variables were selected

For each variable k:

Selection_probability[k] = (# times selected) / 100

Keep variables where Selection_probability[k] >= 0.9

Talk about Randomized Lasso¶

Standard lasso is equal penalty, randomized lasso: penalty is applied with weights $[\alpha, 1]$ when $\alpha = 1$ it is standard Lasso Why was it introduced? What it does? why does it help? Understanding $\alpha$

Figure 4 has randomized lasso. I should also talk about how it is helpful in stability selection.

Main theorem from the paper: Key result of Stability Selection¶

Why do we run in 100 times? why is sample size of n/2 the sweet spot. theoretical error bounds.

Finite-Sample Control of Expected False Positives

The main theorem is basicallu a justification for using stability selection. It says, "if you choose a stability threshold larger than 0.5, then the expected number of false positives is bounded above by a simple formula that depends only on"

  • the average number of selected variables
  • the number of variables
  • the chosen threshold

This is extremely powerful because it creates finite-sample error control, something normal Lasso does not have.

The main thing is that subsample size of n/2 is the sweet spot which $E[V] \le 1$ where $V$ is the number of false positive variables.

$$ \mathbb{E}[V] \le \frac{q_{\Lambda}^2}{(2\pi_{thr}-1)p} $$

where $q_{\Lambda}$ is the average number of variables lasso (or your base variable selection method) chose. If Lasso tends to select too many then the error bound gros and if it selects too less then it shrinks. $p$ is the number of variables More variables → easier to control false positives proportionally. $(2\pi_{thr}-1)$: the effect of threshold. Thus, we must use a threshold greater than 0.5.

Theory¶

This section explains the key theoretical result of Meinshausen & Bühlmann (2010), why stability selection delivers finite-sample error control, and why using subsamples of size exactly n/2 is essential.

Setup¶

We observe data as: $Y = X\beta + \epsilon$ with:

  • $n$ samples
  • $p$ variables
  • possible $p >> n$

Let a base selection algorithm (e.g., Lasso) return a set of selected variables: $$\hat{S}_{\lambda} \subseteq \{1,...,p\}$$

depending on the regularization parameter $\lambda$

Stability selection repeatedly applies this base method to different random subsamples of the data and tracks how frequently each variable is selected.

Selection Probabilites¶

For each variable $k$ its selection probability at tuning parameter $\lambda$ is: $$ \Pi_k(\lambda) = \mathbb{P}(k \in \hat{S}_{\lambda})$$

where the probability is over random subsamples.

This probability cannot be computed analytically. So we approximate it using repeated subsamples. Every time we take a subsample we run Lasso at $\lambda$ and record

  • 1 if variable $k$ is selected
  • 0 if not If we repeat this over $B$ subsamples, we get

$$ \hat{\Pi}_{k}(\lambda) = \frac{1}{B}\sum_{b=1}^{B}I\{k \in \hat{S}_{\lambda}^{(b)} \} $$

This is simply the sample mean of Bernoulli trials i.e. the fraction of subsamples in which variable $k$ is selected.

Before we begin the paper illustrates results on the randomized lasso with a small simulation example where $p=n=200$ where the predictor variables are sampled from a $\mathcal{N}(0,\Sigma)$ distribution where $\Sigma$ is the identity matrix, except for the entries $\Sigma_{13} = \Sigma_{23} = \rho$ and their symmetric counterparts.

In [ ]:
def simulate_design_alt(n, p, rho, seed=None):
    rng = np.random.RandomState(seed)
    X = rng.normal(size=(n, p))
    if p >= 3:
        combined = (X[:, 0] + X[:, 1]) / np.sqrt(2)
        noise = rng.normal(size=n)
        X[:, 2] = rho * combined + np.sqrt(max(0.0, 1 - rho**2)) * noise
    norms = np.linalg.norm(X, axis=0)
    norms[norms == 0] = 1.0
    X = X / norms
    return X

def simulate_response(X, beta, sigma, seed=None):
    rng = np.random.RandomState(seed)
    noise = rng.normal(0, sigma, size=X.shape[0])
    return X.dot(beta) + noise

def selection_frequencies(X, y, alphas, B=100, subsample_fraction=0.5,
                          randomized=False, alpha_weak=0.5, pw=0.5, random_state=None):
    rng = np.random.RandomState(random_state)
    n, p = X.shape
    m = int(np.floor(n * subsample_fraction))
    freqs = np.zeros((len(alphas), p), dtype=float)
    for b in range(B):
        subsample_idx = rng.choice(n, size=m, replace=False)
        X_sub = X[subsample_idx, :].copy()
        y_sub = y[subsample_idx].copy()
        if randomized:
            W = np.ones(p)
            mask = rng.rand(p) < pw
            W[mask] = alpha_weak
            X_sub = X_sub * W[np.newaxis, :]
        for i, a in enumerate(alphas):
            lasso = Lasso(alpha=a, fit_intercept=False, max_iter=5000, tol=1e-4)
            lasso.fit(X_sub, y_sub)
            coef = lasso.coef_
            support = (np.abs(coef) > 1e-8).astype(float)
            freqs[i, :] += support
    freqs /= float(B)
    return freqs

What are these functions doing? simulate_design_alt creates the 3-column correlation structure used in Figure 4 while avoiding numerical PSD problems.

Reproducing Figure 4¶

Before we begin the paper illustrates results on the randomized lasso with a small simulation example where $p=n=200$ where the predictor variables are sampled from a $\mathcal{N}(0,\Sigma)$ distribution where $\Sigma$ is the identity matrix, except for the entries $\Sigma_{13} = \Sigma_{23} = \rho$ and their symmetric counterparts.

The Data Setup¶

  • $n=200$ samples, $p=200$ variables
  • Variable 1 and 2 are truly important $(\beta_1 = 1, \beta_2 = 1)$
  • Variable 3 is NOT important $(\beta_3 = 0)$ BUT it is correlated with variable 1 and 2
  • All other variables are not important $(\beta_4, ... , \beta_{200}=0)$

The problem is when $\rho > 0.5$, variable 3 gets falsely selected because it is correlated. Regular Lasso can't distinguish between variable 3 from variable 1 and 2.

Figure 4 shows three panels comparing different approaches:

  1. $\alpha = 1$ (Regular Lasso):

    • Red Solid line: Important variables, 1 and 2
    • Dashed line: Varibale 3 (false positive)
    • Gray dotted lines: All other not important variables

    Problem: Variable 3 (dashed) has HIGH selection probability - it looks almost as important as the true variables!

  2. $\alpha = 0.5$ (Randomized Lasso, moderate randomness)

    • Red lines still go to 1.0 (good!)
    • Dashed line drops approximately to 0.8
    • Better separation between imp and not imp variables
  3. $\alpha = 0.2$ (Randomized Lasso, more randomness)

    • Red lines still go to 1.0
    • Dashed line drops approximately to 0.7
    • Clear separation: we can now tell variable 3 is different

By randomly perturbing the penalty weights (the $\alpha$ weights) we break the symmetry between the correlated variables. The truly important variables (1 & 2) stay stable, but the false positive (variable 3) becomes less stable.

Actually reproducing graph¶

In [ ]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler
import warnings
warnings.filterwarnings('ignore')

# Set random seed for reproducibility
np.random.seed(42)

print("=" * 80)
print("STABILITY SELECTION PROJECT")
print("Reproduction of Meinshausen & Bühlmann (2010) Figure 4")
print("=" * 80)
================================================================================
STABILITY SELECTION PROJECT
Reproduction of Meinshausen & Bühlmann (2010) Figure 4
================================================================================
In [ ]:
# Data Generation
def generate_figure4_data(n=200, p=200, rho=0.8, seed=None):
    """
    Generate data exactly as described in Section 3.2 of the paper.
    
    Parameters:
    -----------
    n : int
        Number of samples (default: 200)
    p : int
        Number of variables (default: 200)
    rho : float
        Correlation between X1, X2, and X3 (default: 0.8)
    seed : int or None
        Random seed for reproducibility
    
    Returns:
    --------
    X : np.ndarray, shape (n, p)
        Feature matrix
    y : np.ndarray, shape (n,)
        Response variable
    true_support : np.ndarray
        Indices of truly important variables
    
    Notes:
    ------
    - β₁ = 1.0, β₂ = 1.0 (truly important)
    - β₃ = 0.0 (not important but correlated with 1 & 2)
    - β₄...βₚ = 0.0 (noise)
    - Noise variance: σ² = 0.25 (so σ = 0.5)
    """
    if seed is not None:
        np.random.seed(seed)

    Sigma = np.eye(p)
    
    # Make X1, X2, X3 correlated
    Sigma[0, 2] = Sigma[2, 0] = rho  # Corr(X1, X3) = ρ
    Sigma[1, 2] = Sigma[2, 1] = rho  # Corr(X2, X3) = ρ
    # Note: X1 and X2 are NOT correlated with each other
    
    # Generate correlated features from multivariate normal
    X = np.random.multivariate_normal(mean=np.zeros(p), cov=Sigma, size=n)

    # True coefficients
    beta_true = np.zeros(p)
    beta_true[0] = 1.0  # β₁
    beta_true[1] = 1.0  # β₂
    # β₃ = 0.0 (not important)
    # β₄...βₚ = 0.0 (noise)

    # Generate response variable with noise
    # y = Xβ + ε, where ε ~ N(0, 0.25)
    noise = 0.5 * np.random.randn(n) 
    y = X @ beta_true + noise

    true_support = np.array([0, 1])  # Indices of truly important variables
    return X, y, true_support

# Generate data
print("\n" + "=" * 80)
print("STEP 1: Data Generation")
print("=" * 80)

X, y, true_support = generate_figure4_data(n=200, p=200, rho=0.8, seed=42) # We want to show LASSO fails and the paper says when
# rho is greater than 0.5 the irregularity condition fails for LASSO so we chose any arbitrary rho > 0.5 (the paper doesn't 
# specify which rho to use) 

print(f"\nData characteristics:")
print(f"  - Number of samples (n): {X.shape[0]}")
print(f"  - Number of variables (p): {X.shape[1]}")
print(f"  - Response variable range: [{y.min():.2f}, {y.max():.2f}]")
print(f"  - True important variables: X{true_support[0]+1}, X{true_support[1]+1}")
print(f"  - Correlated false positive: X3")
print(f"  - Correlation parameter: ρ = 0.8")

# Verify correlation structure
print(f"\nVerifying correlation structure:")
print(f"  - Corr(X1, X2) = {np.corrcoef(X[:, 0], X[:, 1])[0, 1]:.3f} (should be ≈ 0)")
print(f"  - Corr(X1, X3) = {np.corrcoef(X[:, 0], X[:, 2])[0, 1]:.3f} (should be ≈ 0.8)")
print(f"  - Corr(X2, X3) = {np.corrcoef(X[:, 1], X[:, 2])[0, 1]:.3f} (should be ≈ 0.8)")
print(f"  - Corr(X1, X4) = {np.corrcoef(X[:, 0], X[:, 3])[0, 1]:.3f} (should be ≈ 0)")
================================================================================
STEP 1: Data Generation
================================================================================

Data characteristics:
  - Number of samples (n): 200
  - Number of variables (p): 200
  - Response variable range: [-3.96, 5.27]
  - True important variables: X1, X2
  - Correlated false positive: X3
  - Correlation parameter: ρ = 0.8

Verifying correlation structure:
  - Corr(X1, X2) = 0.052 (should be ≈ 0)
  - Corr(X1, X3) = 0.696 (should be ≈ 0.8)
  - Corr(X2, X3) = 0.609 (should be ≈ 0.8)
  - Corr(X1, X4) = 0.031 (should be ≈ 0)

Since $X_1$ and $X_2$ are the only truly important variables our model is $y = X_1 + X_2 + \epsilon$ where $\epsilon \sim \mathcal{N}(0,0.25)$ i.e. $\sigma = 0.5$ So,

  • $X_1, X_2 \sim \mathcal{N}(0,1)$
  • They are not correlated thus,
  • $X_1 + X_2 \sim \mathcal{N}(0, 2)$
  • Adding the noise with variance $0.25$

Then the standard deviation is $\sqrt{2.25} = 1.5$. The gaussian range is typically $\pm 3SD$. Thus the typical range would be $[-4.5, 4.5]$. Our observed range is $[-3.96, 5.27]$ This is perfectly normal for 200 samples drawn from this distribution — some noise occasionally produces values around 4–5.

In [ ]:
# Stability Selection
def stability_selection_single_run(X, y, subsample_indices, lambda_val, 
                                   randomized=False, alpha_weak=1.0):
    """
    Single run of stability selection for one subsample and one lambda.
    
    Parameters:
    -----------
    X : np.ndarray
        Full feature matrix
    y : np.ndarray
        Full response vector
    subsample_indices : np.ndarray
        Indices to subsample
    lambda_val : float
        Regularization parameter
    randomized : bool
        Whether to use randomized LASSO
    alpha_weak : float
        Weakness parameter (only used if randomized=True)
    
    Returns:
    --------
    selected : np.ndarray
        Binary array indicating which variables were selected
    """
    # Subsample the data
    X_sub = X[subsample_indices, :]
    y_sub = y[subsample_indices]

    if randomized:
        p = X_sub.shape[1]

        # Bernoulli two-point weights (correct!)
        W = np.ones(p)
        mask = np.random.rand(p) < 0.5    # 50% chance
        W[mask] = alpha_weak              # weakened variables get α

        # Apply weights (divide the predictors)
        X_sub = X_sub / W[np.newaxis, :]

    # Fit LASSO
    lasso = Lasso(alpha=lambda_val, fit_intercept=False, max_iter=5000, tol=1e-4, random_state=None)
    lasso.fit(X_sub, y_sub)

    # Return binary selection (1 if selected, 0 otherwise)
    selected = (lasso.coef_ != 0).astype(int)

    return selected

What the above function is doing¶

The function performs one stability-selection experiment:

  • pick one random subsample of the data of size $\frac{n}{2} = \frac{200}{2} = 100$
  • Picking one $\lambda$ value (multiple $\lambda$ would be chosen for the grid)
  • We have an if statement for when we want to fit randomized lasso, otherwise it's just the standard
  • Returns a 0/1 vector of selected variables
In [ ]:
def compute_stability_path(X, y, n_bootstrap=100, lambda_grid=None,
                           randomized=False, alpha_weak=1.0):
    """
    Compute full stability path (selection probabilities for all variables and lambdas).
    
    Parameters:
    -----------
    X : np.ndarray
        Feature matrix
    y : np.ndarray
        Response vector
    n_bootstrap : int
        Number of bootstrap samples
    lambda_grid : np.ndarray or None
        Grid of lambda values to try
    randomized : bool
        Whether to use randomized LASSO
    alpha_weak : float
        Weakness parameter
    
    Returns:
    --------
    selection_probs : np.ndarray, shape (p, n_lambdas)
        Selection probability for each variable and lambda
    lambda_grid : np.ndarray
        Lambda values used
    """
    n, p = X.shape
    subsample_size = n // 2  # Half the data

    if lambda_grid is None:
        # Rule of thumb: start from lambda_max where all coefficients are zero
        lambda_max = np.max(np.abs(X.T @ y)) / n
        lambda_min = lambda_max * 0.01
        lambda_grid = np.logspace(np.log10(lambda_min), np.log10(lambda_max), 50)
    
    n_lambdas = len(lambda_grid)

    # Storage for all selections
    selections = np.zeros((n_bootstrap, p, n_lambdas))
    
    # Run bootstrap
    for b in range(n_bootstrap):
        # Randomly subsample n/2 observations
        subsample_indices = np.random.choice(n, size=subsample_size, replace=False)
        
        # Try each lambda
        for l_idx, lam in enumerate(lambda_grid):
            selections[b, :, l_idx] = stability_selection_single_run(
                X, y, subsample_indices, lam, randomized, alpha_weak
            )
        
        # Progress update
        if (b + 1) % 20 == 0:
            print(f"  Completed {b + 1}/{n_bootstrap} bootstrap samples...")
    
    # Compute selection probabilities (average over bootstrap samples)
    selection_probs = selections.mean(axis=0)  # Shape: (p, n_lambdas)
    
    return selection_probs, lambda_grid

What the above function is doing?

It's computing $$\hat{\Pi}_k(\lambda) = \Pr(\text{variable } k \text{ is selected at } \lambda)$$

In [10]:
print("\n" + "=" * 80)
print("STEP 2: Running Stability Selection")
print("=" * 80)
print("\nThis will take 2-3 minutes. Please wait...\n")

print("(a) Running α = 1.0 (Standard LASSO, no randomization)...")
sel_probs_alpha1, lambda_grid = compute_stability_path(
    X, y, n_bootstrap=100, randomized=False
)

print("\n(b) Running α = 0.5 (Moderate randomization)...")
sel_probs_alpha05, _ = compute_stability_path(
    X, y, n_bootstrap=100, lambda_grid=lambda_grid, 
    randomized=True, alpha_weak=0.5
)

print("\n(c) Running α = 0.2 (Strong randomization)...")
sel_probs_alpha02, _ = compute_stability_path(
    X, y, n_bootstrap=100, lambda_grid=lambda_grid,
    randomized=True, alpha_weak=0.2
)

print("\n✓ All computations complete!")

print("\n" + "=" * 80)
print("STEP 3: Creating Figure 4 Reproduction")
print("=" * 80)

print("\n" + "=" * 80)
print("STEP 3: Creating Figure 4 Reproduction")
print("=" * 80)

# =====================================================================
# STEP 3 — EXACT REPRODUCTION OF FIGURE 4 (matching paper values)
# =====================================================================

fig, axes = plt.subplots(1, 3, figsize=(18, 5))

# IMPORTANT: use λ directly (NO NORMALIZATION)
lambda_plot = lambda_grid  # decreasing order from λ_max → λ_min

# Panels for α = 1.0, 0.5, 0.2
sel_probs_list = [sel_probs_alpha1, sel_probs_alpha05, sel_probs_alpha02]
alpha_values    = [1.0, 0.5, 0.2]
panel_labels    = ['(a)', '(b)', '(c)']

for ax, sel_probs, alpha, label in zip(axes, sel_probs_list, alpha_values, panel_labels):

    # --------------------------
    # True relevant variables
    # --------------------------
    ax.plot(lambda_plot, sel_probs[0, :], color='darkred', linewidth=1.3)
    ax.plot(lambda_plot, sel_probs[1, :], color='darkred', linewidth=1.3)

    # --------------------------
    # Correlated irrelevant variable (X3)
    # --------------------------
    ax.plot(lambda_plot, sel_probs[2, :],
            color='navy', linewidth=1.2, linestyle='--')

    # --------------------------
    # All other irrelevant variables (tiny dotted black lines)
    # --------------------------
    for j in range(3, sel_probs.shape[0]):
        ax.plot(lambda_plot, sel_probs[j, :],
                color='black', linewidth=0.15, alpha=0.20, linestyle=':')

    # --------------------------
    # Formatting (matching paper)
    # --------------------------
    ax.set_xlim(1, 0)  # reverse λ axis (1 → 0)
    ax.set_ylim(0, 1.02)

    ax.set_xlabel('λ', fontsize=12)
    ax.set_ylabel('Π', fontsize=12)
    ax.set_title(f'{label} α = {alpha}', fontsize=13)

    ax.grid(False)      # paper has no grid
    ax.tick_params(axis='both', labelsize=11)

plt.tight_layout()
plt.show()
================================================================================
STEP 2: Running Stability Selection
================================================================================

This will take 2-3 minutes. Please wait...

(a) Running α = 1.0 (Standard LASSO, no randomization)...
  Completed 20/100 bootstrap samples...
  Completed 40/100 bootstrap samples...
  Completed 60/100 bootstrap samples...
  Completed 80/100 bootstrap samples...
  Completed 100/100 bootstrap samples...

(b) Running α = 0.5 (Moderate randomization)...
  Completed 20/100 bootstrap samples...
  Completed 40/100 bootstrap samples...
  Completed 60/100 bootstrap samples...
  Completed 80/100 bootstrap samples...
  Completed 100/100 bootstrap samples...

(c) Running α = 0.2 (Strong randomization)...
  Completed 20/100 bootstrap samples...
  Completed 40/100 bootstrap samples...
  Completed 60/100 bootstrap samples...
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
Cell In[10], line 18
     12 sel_probs_alpha05, _ = compute_stability_path(
     13     X, y, n_bootstrap=100, lambda_grid=lambda_grid, 
     14     randomized=True, alpha_weak=0.5
     15 )
     17 print("\n(c) Running α = 0.2 (Strong randomization)...")
---> 18 sel_probs_alpha02, _ = compute_stability_path(
     19     X, y, n_bootstrap=100, lambda_grid=lambda_grid,
     20     randomized=True, alpha_weak=0.2
     21 )
     23 print("\n✓ All computations complete!")
     25 print("\n" + "=" * 80)

Cell In[9], line 49, in compute_stability_path(X, y, n_bootstrap, lambda_grid, randomized, alpha_weak)
     47 # Try each lambda
     48 for l_idx, lam in enumerate(lambda_grid):
---> 49     selections[b, :, l_idx] = stability_selection_single_run(
     50         X, y, subsample_indices, lam, randomized, alpha_weak
     51     )
     53 # Progress update
     54 if (b + 1) % 20 == 0:

Cell In[8], line 44, in stability_selection_single_run(X, y, subsample_indices, lambda_val, randomized, alpha_weak)
     42 # Fit LASSO
     43 lasso = Lasso(alpha=lambda_val, fit_intercept=False, max_iter=5000, tol=1e-4, random_state=None)
---> 44 lasso.fit(X_sub, y_sub)
     46 # Return binary selection (1 if selected, 0 otherwise)
     47 selected = (lasso.coef_ != 0).astype(int)

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/base.py:1365, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
   1358     estimator._validate_params()
   1360 with config_context(
   1361     skip_parameter_validation=(
   1362         prefer_skip_nested_validation or global_skip_validation
   1363     )
   1364 ):
-> 1365     return fit_method(estimator, *args, **kwargs)

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:1084, in ElasticNet.fit(self, X, y, sample_weight, check_input)
   1082 else:
   1083     this_Xy = None
-> 1084 _, this_coef, this_dual_gap, this_iter = self.path(
   1085     X,
   1086     y[:, k],
   1087     l1_ratio=self.l1_ratio,
   1088     eps=None,
   1089     n_alphas=None,
   1090     alphas=[alpha],
   1091     precompute=precompute,
   1092     Xy=this_Xy,
   1093     copy_X=True,
   1094     coef_init=coef_[k],
   1095     verbose=False,
   1096     return_n_iter=True,
   1097     positive=self.positive,
   1098     check_input=False,
   1099     # from here on **params
   1100     tol=self.tol,
   1101     X_offset=X_offset,
   1102     X_scale=X_scale,
   1103     max_iter=self.max_iter,
   1104     random_state=self.random_state,
   1105     selection=self.selection,
   1106     sample_weight=sample_weight,
   1107 )
   1108 coef_[k] = this_coef[:, 0]
   1109 dual_gaps_[k] = this_dual_gap[0]

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/utils/_param_validation.py:191, in validate_params.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    189 global_skip_validation = get_config()["skip_parameter_validation"]
    190 if global_skip_validation:
--> 191     return func(*args, **kwargs)
    193 func_sig = signature(func)
    195 # Map *args/**kwargs to the function signature

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695, in enet_path(X, y, l1_ratio, eps, n_alphas, alphas, precompute, Xy, copy_X, coef_init, verbose, return_n_iter, positive, check_input, **params)
    681     model = cd_fast.enet_coordinate_descent_gram(
    682         coef_,
    683         l1_reg,
   (...)
    692         positive,
    693     )
    694 elif precompute is False:
--> 695     model = cd_fast.enet_coordinate_descent(
    696         coef_, l1_reg, l2_reg, X, y, max_iter, tol, rng, random, positive
    697     )
    698 else:
    699     raise ValueError(
    700         "Precompute should be one of True, False, 'auto' or array-like. Got %r"
    701         % precompute
    702     )

KeyboardInterrupt: 
In [ ]:
print("\n" + "=" * 80)
print("STEP 4: Quantitative Analysis")
print("=" * 80)

print("\nMaximum Selection Probabilities:")
print("=" * 80)

results_df = pd.DataFrame({
    'Variable': ['X₁ (true)', 'X₂ (true)', 'X₃ (false)', 'X₄ (noise)'],
    'α=1.0': [
        sel_probs_alpha1[0, :].max(),
        sel_probs_alpha1[1, :].max(),
        sel_probs_alpha1[2, :].max(),
        sel_probs_alpha1[3, :].max()
    ],
    'α=0.5': [
        sel_probs_alpha05[0, :].max(),
        sel_probs_alpha05[1, :].max(),
        sel_probs_alpha05[2, :].max(),
        sel_probs_alpha05[3, :].max()
    ],
    'α=0.2': [
        sel_probs_alpha02[0, :].max(),
        sel_probs_alpha02[1, :].max(),
        sel_probs_alpha02[2, :].max(),
        sel_probs_alpha02[3, :].max()
    ]
})

print(results_df.to_string(index=False))

# Calculate gaps (difference between true and false positive)
print("\n" + "=" * 80)
print("Separation Between True Variables and False Positive:")
print("=" * 80)

for alpha_name, sel_probs in [('α=1.0', sel_probs_alpha1),
                                ('α=0.5', sel_probs_alpha05),
                                ('α=0.2', sel_probs_alpha02)]:
    
    true_avg = (sel_probs[0, :].max() + sel_probs[1, :].max()) / 2
    false_max = sel_probs[2, :].max()
    gap = true_avg - false_max
    
    print(f"\n{alpha_name}:")
    print(f"  Average max Π for true variables (X₁, X₂): {true_avg:.3f}")
    print(f"  Max Π for false positive (X₃):            {false_max:.3f}")
    print(f"  Gap (separation):                          {gap:.3f}")
    
    if gap > 0.25:
        print(f"  → ✓ Good separation! Can distinguish with π_thr ≈ {false_max + 0.05:.2f}")
    elif gap > 0.15:
        print(f"  → ⚠ Moderate separation. Use π_thr ≈ {false_max + 0.05:.2f}")
    else:
        print(f"  → ✗ Poor separation. Hard to distinguish true from false.")
================================================================================
STEP 4: Quantitative Analysis
================================================================================

Maximum Selection Probabilities:
================================================================================
  Variable  α=1.0  α=0.5  α=0.2
 X₁ (true)   1.00   1.00   1.00
 X₂ (true)   1.00   1.00   1.00
X₃ (false)   1.00   0.79   0.69
X₄ (noise)   0.28   0.32   0.43

================================================================================
Separation Between True Variables and False Positive:
================================================================================

α=1.0:
  Average max Π for true variables (X₁, X₂): 1.000
  Max Π for false positive (X₃):            1.000
  Gap (separation):                          0.000
  → ✗ Poor separation. Hard to distinguish true from false.

α=0.5:
  Average max Π for true variables (X₁, X₂): 1.000
  Max Π for false positive (X₃):            0.790
  Gap (separation):                          0.210
  → ⚠ Moderate separation. Use π_thr ≈ 0.84

α=0.2:
  Average max Π for true variables (X₁, X₂): 1.000
  Max Π for false positive (X₃):            0.690
  Gap (separation):                          0.310
  → ✓ Good separation! Can distinguish with π_thr ≈ 0.74

Diagnostic Runs¶

In [ ]:
import numpy as np
from sklearn.linear_model import Lasso

# --- helper: standardize columns to unit l2 norm (important) ---
def standardize_cols(X):
    norms = np.linalg.norm(X, axis=0)
    norms[norms==0] = 1.0
    return X / norms[np.newaxis, :], norms

# --- population IR check ---
def compute_population_ir(Sigma, beta_true, S_idx):
    S = np.array(S_idx)
    N = [i for i in range(Sigma.shape[0]) if i not in S]
    Sigma_SS = Sigma[np.ix_(S, S)]
    invSS = np.linalg.inv(Sigma_SS)
    sgn = np.sign(beta_true[S])
    vals = []
    for k in N:
        Sigma_Sk = Sigma[np.ix_(S, [k])].reshape(-1)
        vals.append(abs(sgn.dot(invSS.dot(Sigma_Sk))))
    return max(vals), np.array(vals)

# --- lasso select one (sklearn alpha scaling) ---
def lasso_select_one(X_sub, y_sub, lambda_abs, randomized=False, alpha_weak=1.0, weight_scheme='uniform', rng=None):
    if rng is None: rng=np.random
    if randomized:
        p = X_sub.shape[1]
        if weight_scheme == 'uniform':
            W = rng.uniform(alpha_weak, 1.0, p)
        elif weight_scheme == 'bernoulli':
            # two-point: W in {alpha_weak, 1} with prob 0.5 each
            mask = rng.rand(p) < 0.5
            W = np.ones(p)
            W[mask] = alpha_weak
        else:
            raise ValueError("weight_scheme must be 'uniform' or 'bernoulli'")
        X_use = X_sub / W[np.newaxis, :]
    else:
        X_use = X_sub

    n_sub = X_sub.shape[0]
    alpha_sklearn = lambda_abs / (2.0 * n_sub)
    model = Lasso(alpha=alpha_sklearn, fit_intercept=False, max_iter=10000, tol=1e-6)
    model.fit(X_use, y_sub)
    return (model.coef_ != 0).astype(int)

# --- compute stability path variant (single weight_scheme) ---
def stability_path_variant(Xs, y, lambda_abs, n_bootstrap=100, randomized=False, alpha_weak=1.0, weight_scheme='uniform', rng_seed=0):
    rng = np.random.RandomState(rng_seed)
    n, p = Xs.shape
    subsz = n // 2
    nL = len(lambda_abs)
    selections = np.zeros((n_bootstrap, p, nL))
    for b in range(n_bootstrap):
        idx = rng.choice(n, size=subsz, replace=False)
        X_sub = Xs[idx]; y_sub = y[idx]
        for j, lam in enumerate(lambda_abs):
            selections[b, :, j] = lasso_select_one(X_sub, y_sub, lam,
                                                  randomized=randomized, alpha_weak=alpha_weak,
                                                  weight_scheme=weight_scheme, rng=rng)
    return selections.mean(axis=0)

# -----------------------
# Run diagnostics (use your X,y)
# -----------------------
Xs, norms = standardize_cols(X)   # IMPORTANT: use standardized X here
Sigma_sample = np.cov(X, rowvar=False)  # sample covariance (n large enough)
Sigma_pop = np.cov(X, rowvar=False)     # for our simulation these are same-ish

# Compute population IR (S = [0,1])
ic_pop, ic_array = compute_population_ir(Sigma_pop, beta_true, S_idx=[0,1])
print("Population IR max =", ic_pop)

# Build lambda grid relative to standardized Xs
lambda_max = np.max(np.abs(Xs.T @ y))
lambda_rel = np.linspace(1.0, 0.0, 50)
lambda_abs = lambda_rel * lambda_max
print("lambda_max (standardized) =", lambda_max)
print("example alpha_sklearn for first lambda:", lambda_abs[0] / (2*(X.shape[0]//2)))

# Now run two weight schemes for alpha_weak=0.5 (compare)
print("\nRunning randomized stability (alpha_weak=0.5) with Uniform weights...")
sel_uniform = stability_path_variant(Xs, y, lambda_abs, n_bootstrap=100,
                                     randomized=True, alpha_weak=0.5,
                                     weight_scheme='uniform', rng_seed=123)
print("X3 selection probs (uniform):", np.round(sel_uniform[2,:], 3))
print("max Π X3 (uniform):", sel_uniform[2,:].max())

print("\nRunning randomized stability (alpha_weak=0.5) with Bernoulli {1,α} weights...")
sel_bernoulli = stability_path_variant(Xs, y, lambda_abs, n_bootstrap=100,
                                       randomized=True, alpha_weak=0.5,
                                       weight_scheme='bernoulli', rng_seed=123)
print("X3 selection probs (bernoulli):", np.round(sel_bernoulli[2,:], 3))
print("max Π X3 (bernoulli):", sel_bernoulli[2,:].max())

# Also give true variables' max for comparison
print("\nTrue vars max Π (uniform):", sel_uniform[0,:].max(), sel_uniform[1,:].max())
print("True vars max Π (bernoulli):", sel_bernoulli[0,:].max(), sel_bernoulli[1,:].max())
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Cell In[11], line 71
     68 Sigma_pop = np.cov(X, rowvar=False)     # for our simulation these are same-ish
     70 # Compute population IR (S = [0,1])
---> 71 ic_pop, ic_array = compute_population_ir(Sigma_pop, beta_true, S_idx=[0,1])
     72 print("Population IR max =", ic_pop)
     74 # Build lambda grid relative to standardized Xs

NameError: name 'beta_true' is not defined

The perfect one so far: Recreation of Fig 4¶

In [3]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso
In [ ]:
def generate_figure4_data(n=200, p=200, rho=0.8, seed=42):
    np.random.seed(seed)

    # 1. Independent base features
    X = np.random.randn(n, p)

    # 2. Overwrite X1, X2 as independent N(0,1)
    X1 = np.random.randn(n)
    X2 = np.random.randn(n)

    # 3. Generate X3 as correlated variable
    Z = np.random.randn(n)
    X3 = rho * (X1 + X2) / np.sqrt(2) + np.sqrt(1 - rho**2) * Z

    # Insert back into the design matrix
    X[:,0] = X1
    X[:,1] = X2
    X[:,2] = X3
    # The rest X4..Xp remain independent noise

    # True beta
    beta = np.zeros(p)
    beta[0] = 1
    beta[1] = 1

    # Noise σ = 0.5
    y = X @ beta + 0.5*np.random.randn(n)

    return X, y, beta
In [ ]:
def standardize(X):
    norms = np.linalg.norm(X, axis=0)
    norms[norms == 0] = 1.0
    return X / norms, norms
In [ ]:
def apply_random_weights(X_sub, alpha_weak):
    """
    Two-point Bernoulli randomization:
        W_k = 1        with prob 1/2
        W_k = α_weak   with prob 1/2
    """
    p = X_sub.shape[1]
    W = np.ones(p)
    mask = np.random.rand(p) < 0.5
    W[mask] = alpha_weak
    return X_sub / W[np.newaxis, :]
In [ ]:
def run_lasso_once(X_sub, y_sub, lambda_val):
    n_sub = X_sub.shape[0]
    alpha_sklearn = lambda_val / (2*n_sub)
    model = Lasso(alpha=alpha_sklearn, fit_intercept=False, max_iter=10000)
    model.fit(X_sub, y_sub)
    return (model.coef_ != 0).astype(int)
In [ ]:
def stability_path(Xs, y, lambda_grid, B=100, randomized=False, alpha_weak=1.0):
    n, p = Xs.shape
    m = n // 2   # subsample size
    L = len(lambda_grid)

    selections = np.zeros((B, p, L))

    for b in range(B):
        idx = np.random.choice(n, size=m, replace=False)
        X_sub = Xs[idx]
        y_sub = y[idx]

        if randomized:
            X_sub = apply_random_weights(X_sub, alpha_weak)

        for j, lam in enumerate(lambda_grid):
            selections[b,:,j] = run_lasso_once(X_sub, y_sub, lam)

    return selections.mean(axis=0)
In [ ]:
def make_lambda_grid(Xs, y, n_lambdas=50):
    lambda_max = np.max(np.abs(Xs.T @ y))
    lambda_min = 0.01 * lambda_max
    lambda_grid = np.logspace(np.log10(lambda_max), np.log10(lambda_min), n_lambdas)
    return lambda_grid
In [ ]:
# 1. Generate data
X, y, beta_true = generate_figure4_data()

# 2. Standardize X
Xs, norms = standardize(X)

# 3. Lambda grid
lambda_grid = make_lambda_grid(Xs, y)

# 4. Stability paths
print("Running α = 1.0 (standard LASSO)...")
Pi_a1 = stability_path(Xs, y, lambda_grid, B=100, randomized=False)

print("Running α = 0.5 (moderate randomization)...")
Pi_a05 = stability_path(Xs, y, lambda_grid, B=100, randomized=True, alpha_weak=0.5)

print("Running α = 0.2 (strong randomization)...")
Pi_a02 = stability_path(Xs, y, lambda_grid, B=100, randomized=True, alpha_weak=0.2)
Running α = 1.0 (standard LASSO)...
Running α = 0.5 (moderate randomization)...
Running α = 0.2 (strong randomization)...
In [ ]:
fig, axes = plt.subplots(1, 3, figsize=(18, 5))
paths = [Pi_a1, Pi_a05, Pi_a02]
alphas = [1.0, 0.5, 0.2]
labels = ['(a)', '(b)', '(c)']

for ax, Pi, alpha, lab in zip(axes, paths, alphas, labels):
    
    # TRUE VARIABLES (X1,X2)
    ax.plot(lambda_grid, Pi[0], color='darkred', linewidth=2)
    ax.plot(lambda_grid, Pi[1], color='darkred', linewidth=2)

    # CORRELATED FALSE VAR (X3)
    ax.plot(lambda_grid, Pi[2], color='navy', linewidth=2, linestyle='--')

    # OTHER IRRELEVANT VARS
    for k in range(3, Pi.shape[0]):
        ax.plot(lambda_grid, Pi[k], color='black', alpha=0.3, linewidth=0.3)

    ax.set_xscale('log')
    ax.set_xlim(lambda_grid[-1], lambda_grid[0])
    ax.invert_xaxis()

    ax.set_ylim(0, 1.05)
    ax.set_xlabel("λ", fontsize=12)
    ax.set_ylabel("Π", fontsize=12)
    ax.set_title(f"{lab} α = {alpha}", fontsize=14)

plt.tight_layout()
plt.show()
No description has been provided for this image

Apparently there is some problem in the code above¶

In [4]:
def generate_figure4_data_paper(n=200, p=200, rho=0.8, seed=42):
    np.random.seed(seed)

    # Build covariance matrix Sigma EXACTLY as in the paper
    Sigma = np.eye(p)
    Sigma[0,2] = Sigma[2,0] = rho
    Sigma[1,2] = Sigma[2,1] = rho
    
    # Draw X from N(0, Σ)
    X = np.random.multivariate_normal(
        mean=np.zeros(p),
        cov=Sigma,
        size=n
    )

    # True coefficients
    beta = np.zeros(p)
    beta[0] = 1
    beta[1] = 1

    # Noise variance 1/4  →  sd = 1/2  (matching paper)
    eps = 0.5 * np.random.randn(n)

    y = X @ beta + eps

    return X, y, beta

def standardize(X):
    norms = np.linalg.norm(X, axis=0)
    norms[norms == 0] = 1.0
    return X / norms, norms

def apply_random_weights(X_sub, alpha_weak):
    """
    Two-point Bernoulli randomization:
        W_k = 1        with prob 1/2
        W_k = α_weak   with prob 1/2
    """
    p = X_sub.shape[1]
    W = np.ones(p)
    mask = np.random.rand(p) < 0.5
    W[mask] = alpha_weak
    return X_sub / W[np.newaxis, :]

def run_lasso_once(X_sub, y_sub, lambda_val):
    n_sub = X_sub.shape[0]
    alpha_sklearn = lambda_val / (2*n_sub)
    model = Lasso(alpha=alpha_sklearn, fit_intercept=False, max_iter=10000)
    model.fit(X_sub, y_sub)
    return (model.coef_ != 0).astype(int)

def stability_path(Xs, y, lambda_grid, B=100, randomized=False, alpha_weak=1.0):
    n, p = Xs.shape
    m = n // 2   # subsample size
    L = len(lambda_grid)

    selections = np.zeros((B, p, L))

    for b in range(B):
        idx = np.random.choice(n, size=m, replace=False)
        X_sub = Xs[idx]
        y_sub = y[idx]

        if randomized:
            X_sub = apply_random_weights(X_sub, alpha_weak)

        for j, lam in enumerate(lambda_grid):
            selections[b,:,j] = run_lasso_once(X_sub, y_sub, lam)

    return selections.mean(axis=0)

def make_lambda_grid(Xs, y, n_lambdas=50):
    lambda_max = np.max(np.abs(Xs.T @ y))
    lambda_min = 0.01 * lambda_max
    lambda_grid = np.logspace(np.log10(lambda_max), np.log10(lambda_min), n_lambdas)
    return lambda_grid

# 1. Generate data
X, y, beta_true = generate_figure4_data_paper()

# 2. Standardize X
Xs, norms = standardize(X)

# 3. Lambda grid
lambda_grid = make_lambda_grid(Xs, y)

# 4. Stability paths
print("Running α = 1.0 (standard LASSO)...")
Pi_a1 = stability_path(Xs, y, lambda_grid, B=100, randomized=False)

print("Running α = 0.5 (moderate randomization)...")
Pi_a05 = stability_path(Xs, y, lambda_grid, B=100, randomized=True, alpha_weak=0.5)

print("Running α = 0.2 (strong randomization)...")
Pi_a02 = stability_path(Xs, y, lambda_grid, B=100, randomized=True, alpha_weak=0.2)

fig, axes = plt.subplots(1, 3, figsize=(18, 5))
paths = [Pi_a1, Pi_a05, Pi_a02]
alphas = [1.0, 0.5, 0.2]
labels = ['(a)', '(b)', '(c)']

for ax, Pi, alpha, lab in zip(axes, paths, alphas, labels):
    
    # TRUE VARIABLES (X1,X2)
    ax.plot(lambda_grid, Pi[0], color='darkred', linewidth=2)
    ax.plot(lambda_grid, Pi[1], color='darkred', linewidth=2)

    # CORRELATED FALSE VAR (X3)
    ax.plot(lambda_grid, Pi[2], color='navy', linewidth=2, linestyle='--')

    # OTHER IRRELEVANT VARS
    for k in range(3, Pi.shape[0]):
        ax.plot(lambda_grid, Pi[k], color='black', alpha=0.3, linewidth=0.3)

    ax.set_xscale('log')
    ax.set_xlim(lambda_grid[-1], lambda_grid[0])
    ax.invert_xaxis()

    ax.set_ylim(0, 1.05)
    ax.set_xlabel("λ", fontsize=12)
    ax.set_ylabel("Π", fontsize=12)
    ax.set_title(f"{lab} α = {alpha}", fontsize=14)

plt.tight_layout()
plt.show()


    
/tmp/ipykernel_969122/2102222907.py:10: RuntimeWarning: covariance is not symmetric positive-semidefinite.
  X = np.random.multivariate_normal(
Running α = 1.0 (standard LASSO)...
Running α = 0.5 (moderate randomization)...
Running α = 0.2 (strong randomization)...
No description has been provided for this image

Good one¶

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso

# Step 1: Generate data
def generate_data(n=200, p=200, rho=0.8, seed=42):
    np.random.seed(seed)
    Sigma = np.eye(p)
    Sigma[0, 2] = Sigma[2, 0] = rho
    Sigma[1, 2] = Sigma[2, 1] = rho
    X = np.random.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[0] = beta[1] = 1.0
    y = X @ beta + 0.5 * np.random.randn(n)
    return X, y

# Step 2: Stability selection
def stability_selection(X, y, lambda_grid, n_bootstrap=100, randomized=False, alpha_weak=1.0):
    n, p = X.shape
    subsample_size = n // 2
    selections = np.zeros((n_bootstrap, p, len(lambda_grid)))

    for b in range(n_bootstrap):
        np.random.seed(b + 42)
        idx = np.random.choice(n, subsample_size, replace=False)
        X_sub, y_sub = X[idx], y[idx]

        if randomized:
            weights = np.random.uniform(alpha_weak, 1.0, p)
            X_sub = X_sub / weights[np.newaxis, :]

        for l_idx, lam in enumerate(lambda_grid):
            model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
            model.fit(X_sub, y_sub)
            selections[b, :, l_idx] = (model.coef_ != 0).astype(int)

    return selections.mean(axis=0)

# Step 3: Setup
X, y = generate_data()
lambda_max = np.max(np.abs(X.T @ y)) / X.shape[0]
lambda_grid = np.logspace(np.log10(lambda_max / 100), np.log10(lambda_max), 50)[::-1]

# Step 4: Run stability selection
sel_probs_1 = stability_selection(X, y, lambda_grid, randomized=False)
sel_probs_05 = stability_selection(X, y, lambda_grid, randomized=True, alpha_weak=0.5)
sel_probs_02 = stability_selection(X, y, lambda_grid, randomized=True, alpha_weak=0.2)

# Step 5: Plot
fig, axes = plt.subplots(1, 3, figsize=(18, 5))
for ax, sel_probs, alpha, label in zip(
    axes,
    [sel_probs_1, sel_probs_05, sel_probs_02],
    [1.0, 0.5, 0.2],
    ['(a)', '(b)', '(c)']
):
    ax.plot(lambda_grid, sel_probs[0], color='red', linewidth=2)       # X1
    ax.plot(lambda_grid, sel_probs[1], color='red', linewidth=2)       # X2
    ax.plot(lambda_grid, sel_probs[2], color='blue', linewidth=2, linestyle='--')  # X3
    for j in range(3, sel_probs.shape[0]):
        ax.plot(lambda_grid, sel_probs[j], color='black', linewidth=0.3, alpha=0.2, linestyle=':')
    ax.set_xlim(1, lambda_grid.min())
    ax.set_ylim(0, 1.05)
    ax.set_xlabel('λ', fontsize=12)
    ax.set_ylabel('Π', fontsize=12)
    ax.set_title(f'{label} α = {alpha}', fontsize=13)
    ax.tick_params(axis='both', labelsize=11)

plt.tight_layout()
plt.show()
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
Cell In[21], line 47
     45 sel_probs_1 = stability_selection(X, y, lambda_grid, randomized=False)
     46 sel_probs_05 = stability_selection(X, y, lambda_grid, randomized=True, alpha_weak=0.5)
---> 47 sel_probs_02 = stability_selection(X, y, lambda_grid, randomized=True, alpha_weak=0.2)
     49 # Step 5: Plot
     50 fig, axes = plt.subplots(1, 3, figsize=(18, 5))

Cell In[21], line 34, in stability_selection(X, y, lambda_grid, n_bootstrap, randomized, alpha_weak)
     32     for l_idx, lam in enumerate(lambda_grid):
     33         model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
---> 34         model.fit(X_sub, y_sub)
     35         selections[b, :, l_idx] = (model.coef_ != 0).astype(int)
     37 return selections.mean(axis=0)

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/base.py:1365, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
   1358     estimator._validate_params()
   1360 with config_context(
   1361     skip_parameter_validation=(
   1362         prefer_skip_nested_validation or global_skip_validation
   1363     )
   1364 ):
-> 1365     return fit_method(estimator, *args, **kwargs)

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:1048, in ElasticNet.fit(self, X, y, sample_weight, check_input)
   1039     # Note: Alternatively, we could also have rescaled alpha instead
   1040     # of sample_weight:
   1041     #
   (...)
   1045 # X and y will be rescaled if sample_weight is not None, order='F'
   1046 # ensures that the returned X and y are still F-contiguous.
   1047 should_copy = self.copy_X and not X_copied
-> 1048 X, y, X_offset, y_offset, X_scale, precompute, Xy = _pre_fit(
   1049     X,
   1050     y,
   1051     None,
   1052     self.precompute,
   1053     fit_intercept=self.fit_intercept,
   1054     copy=should_copy,
   1055     check_input=check_input,
   1056     sample_weight=sample_weight,
   1057 )
   1058 # coordinate descent needs F-ordered arrays and _pre_fit might have
   1059 # called _rescale_data
   1060 if check_input or sample_weight is not None:

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_base.py:809, in _pre_fit(X, y, Xy, precompute, fit_intercept, copy, check_input, sample_weight)
    799     X, y, X_offset, y_offset, X_scale = _preprocess_data(
    800         X,
    801         y,
   (...)
    805         sample_weight=sample_weight,
    806     )
    807 else:
    808     # copy was done in fit if necessary
--> 809     X, y, X_offset, y_offset, X_scale = _preprocess_data(
    810         X,
    811         y,
    812         fit_intercept=fit_intercept,
    813         copy=copy,
    814         check_input=check_input,
    815         sample_weight=sample_weight,
    816     )
    817     # Rescale only in dense case. Sparse cd solver directly deals with
    818     # sample_weight.
    819     if sample_weight is not None:
    820         # This triggers copies anyway.

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_base.py:193, in _preprocess_data(X, y, fit_intercept, copy, copy_y, sample_weight, check_input)
    191     y -= y_offset
    192 else:
--> 193     X_offset = xp.zeros(n_features, dtype=X.dtype, device=device_)
    194     if y.ndim == 1:
    195         y_offset = xp.asarray(0.0, dtype=dtype_, device=device_)

File ~/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/externals/array_api_compat/_internal.py:32, in get_xp.<locals>.inner.<locals>.wrapped_f(*args, **kwargs)
     31 def inner(f: Callable[..., _T], /) -> Callable[..., _T]:
---> 32     @wraps(f)
     33     def wrapped_f(*args: object, **kwargs: object) -> object:
     34         return f(*args, xp=xp, **kwargs)
     36     sig = signature(f)

KeyboardInterrupt: 

You should know everything about this graph¶

Extending¶

For a concrete extension just like we discussed in our meeting. We use the same dimensions, $n=p=200$ but now

  • The number of true variables are 5
  • Give the true variables unequal variances i.e. the (diagonal entries) of the matrix are different
  • Small correlated variables (3-5) with
    • Put small within-S correlations $\rho_s$ and create 2 distractors (variables that are correlated with some signals) with correlation parameter $\tau$
  • Use two-point randomization weights $W \in \{1, \alpha\} \text{ with prob } 1/2 (\alpha=0.5 \text{ and } \alpha=0.2)$
  • Use $B=100$ subsamples
In [ ]:
import numpy as np

def population_ir(Sigma, beta_S_idx):
    """
    Sigma: p x p covariance matrix
    beta_S_idx: list/array of indices for true signals S
    returns: (max_q, q_array) where q_array are values for all noise indices
    """
    S = np.array(beta_S_idx)
    p = Sigma.shape[0]
    N = [i for i in range(p) if i not in S]
    Sigma_SS = Sigma[np.ix_(S,S)]
    inv_SS = np.linalg.inv(Sigma_SS)
    sgn = np.ones(len(S))  # assume positive coefficients; if general, pass signs
    q_vals = []
    for k in N:
        Sigma_Sk = Sigma[np.ix_(S,[k])].reshape(-1)
        q = abs(sgn.dot(inv_SS.dot(Sigma_Sk)))
        q_vals.append(q)
    return max(q_vals), np.array(q_vals)
In [ ]:
def build_sigma_extended(p, S_idx, variances, rho_withinS=0.2, distractors=None):
    """
    p: total variables
    S_idx: list of signal indices (length s)
    variances: length-p array of variances (diagonal)
    rho_withinS: correlation inside S block (equicorr)
    distractors: list of tuples (k, signal_list, tau_k) meaning var k has correlation tau_k with listed signal indices
    Returns Sigma (p x p)
    """
    Sigma = np.zeros((p,p))
    for i in range(p):
        Sigma[i,i] = variances[i]
    # within-S equicorrelation (scaled by sqrt variances)
    S = list(S_idx)
    for i in S:
        for j in S:
            if i != j:
                Sigma[i,j] = rho_withinS * np.sqrt(variances[i]*variances[j])
    # distractor correlations
    if distractors is not None:
        for (k, sig_list, tau) in distractors:
            for s in sig_list:
                Sigma[k, s] = tau * np.sqrt(variances[k]*variances[s])
                Sigma[s, k] = Sigma[k, s]
    # small ridge for PD
    Sigma += 1e-8 * np.eye(p)
    return Sigma
In [ ]:
# Example parameters
p = 200
S_idx = [0,1,2,3]
variances = np.ones(p)
variances[0]=1.0; variances[1]=1.5; variances[2]=0.8; variances[3]=1.2
distractors = [(4, [0,1], None), (5, [2,3], None)]  # we'll fill taus later

rho_grid = np.linspace(0.0, 0.9, 19)
tau_grid = np.linspace(0.0, 0.9, 19)

pop_ir_map = np.zeros((len(rho_grid), len(tau_grid)))
for i, rho in enumerate(rho_grid):
    for j, tau in enumerate(tau_grid):
        # build Sigma with within S correlation rho and distractor tau
        diss = [(4, [0,1], tau), (5, [0,1], tau/2.0)]
        Sigma = build_sigma_extended(p, S_idx, variances, rho_withinS=rho, distractors=diss)
        max_q, q_vals = population_ir(Sigma, S_idx)
        pop_ir_map[i,j] = max_q

# Plot contour pop_ir_map and contour level 1 (IR boundary)
In [ ]:
# Extended experiments: population IR + sample IR + stability selection vs Lasso baseline
# Copy-paste this whole block into a notebook cell and run.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso, LassoCV
import time
import pandas as pd

# -------------------------
# Utilities: build Sigma
# -------------------------
def build_extended_sigma(p=200, true_idx=None, s_true=7,
                         signal_variances=None, rho_withinS=0.0,
                         distractor_specs=None, ridge=1e-10, rng=None):
    """
    Build covariance Sigma (p x p).
    - true_idx: list of indices of true variables (if None, uses 0..s_true-1)
    - signal_variances: length-p array of variances or None (defaults: 1 for non-signal, sampled for signals)
    - rho_withinS: equicorrelation among signals (applied scaled by sqrt(var_i var_j))
    - distractor_specs: list of (k_index, list_of_signal_indices, tau_k)
    """
    if rng is None:
        rng = np.random.RandomState(123)
    if true_idx is None:
        true_idx = list(range(s_true))
    else:
        s_true = len(true_idx)
    # default variances
    if signal_variances is None:
        variances = np.ones(p)
        sampled = rng.uniform(0.7, 1.6, size=s_true)
        for i, idx in enumerate(true_idx):
            variances[idx] = float(sampled[i])
    else:
        variances = np.array(signal_variances, dtype=float)
    Sigma = np.zeros((p,p), dtype=float)
    for i in range(p):
        Sigma[i,i] = variances[i]
    # within-S equicorr
    if rho_withinS != 0.0:
        for i in true_idx:
            for j in true_idx:
                if i != j:
                    Sigma[i,j] = rho_withinS * np.sqrt(variances[i]*variances[j])
    # distractor correlations
    if distractor_specs is not None:
        for (k, sig_list, tau_k) in distractor_specs:
            for s in sig_list:
                Sigma[k,s] = tau_k * np.sqrt(variances[k]*variances[s])
                Sigma[s,k] = Sigma[k,s]
    Sigma += ridge * np.eye(p)
    #symmetrize & clip tiny negative eigs for numerical safety
    Sigma = (Sigma + Sigma.T)/2.0
    eigs, vecs = np.linalg.eigh(Sigma)
    if np.min(eigs) < -1e-12:
        # clip negatives to tiny positive
        eigs_clipped = np.clip(eigs, 1e-12, None)
        Sigma = vecs @ np.diag(eigs_clipped) @ vecs.T
    return Sigma, true_idx

# -------------------------
# Population IR computation
# -------------------------
def population_ir(Sigma, true_idx, sign_beta=None):
    S = np.array(true_idx)
    p = Sigma.shape[0]
    N = [i for i in range(p) if i not in S]
    Sigma_SS = Sigma[np.ix_(S,S)]
    invSS = np.linalg.inv(Sigma_SS)
    if sign_beta is None:
        sign_beta = np.ones(len(S))
    sign_beta = np.array(sign_beta)
    q_vals = []
    for k in N:
        Sigma_Sk = Sigma[np.ix_(S,[k])].reshape(-1)
        q = abs(sign_beta.dot(invSS.dot(Sigma_Sk)))
        q_vals.append(q)
    return max(q_vals), np.array(q_vals), N

# -------------------------
# Sample X from Sigma (SVD/eig-based)
# -------------------------
def sample_X_from_sigma(n, Sigma, rng=None):
    if rng is None:
        rng = np.random.RandomState(0)
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    eigvals_clipped = np.clip(eigvals, 0, None)
    Sroot = eigvecs @ np.diag(np.sqrt(eigvals_clipped)) @ eigvecs.T
    Z = rng.randn(n, Sigma.shape[0])
    X = Z @ Sroot
    return X

# -------------------------
# Standardize columns for Lasso (unit L2 norm)
# -------------------------
def standardize_cols(X):
    norms = np.linalg.norm(X, axis=0)
    norms[norms==0] = 1.0
    Xs = X / norms[np.newaxis, :]
    return Xs, norms

# -------------------------
# Make lambda grid (relative to standardized X and y)
# -------------------------
def make_lambda_grid(Xs, y, n_lambdas=50):
    lambda_max = np.max(np.abs(Xs.T @ y))
    lambda_min = 0.01 * lambda_max
    # logspace from lambda_max -> lambda_min (as in glmnet)
    lambda_grid = np.logspace(np.log10(lambda_max), np.log10(lambda_min), n_lambdas)
    return lambda_grid, lambda_max

# -------------------------
# Bernoulli two-point randomization apply
# -------------------------
def apply_random_weights(X_sub, alpha_weak, rng=None):
    if rng is None:
        rng = np.random
    p = X_sub.shape[1]
    W = np.ones(p)
    mask = rng.rand(p) < 0.5   # prob 1/2
    W[mask] = alpha_weak
    return X_sub / W[np.newaxis, :]

# -------------------------
# Lasso fit single (sklearn scaling)
# -------------------------
from sklearn.linear_model import Lasso

def lasso_select_one_scaled(X_sub, y_sub, lambda_abs, randomized=False, alpha_weak=1.0, rng=None):
    # X_sub must be standardized columns (unit L2 norm) before calling this
    if rng is None:
        rng = np.random
    X_use = X_sub
    if randomized:
        X_use = apply_random_weights(X_sub, alpha_weak, rng=rng)
    n_sub = X_use.shape[0]
    alpha_sklearn = lambda_abs / (2.0 * n_sub)
    model = Lasso(alpha=alpha_sklearn, fit_intercept=False, max_iter=10000, tol=1e-6)
    model.fit(X_use, y_sub)
    return (model.coef_ != 0).astype(int)

# -------------------------
# Stability selection path
# -------------------------
def stability_path(Xs, y, lambda_abs, B=100, randomized=False, alpha_weak=1.0, rng_seed=0):
    rng = np.random.RandomState(rng_seed)
    n, p = Xs.shape
    subsz = n//2
    L = len(lambda_abs)
    selections = np.zeros((B, p, L))
    for b in range(B):
        idx = rng.choice(n, size=subsz, replace=False)
        X_sub = Xs[idx, :]
        y_sub = y[idx]
        for j, lam in enumerate(lambda_abs):
            selections[b, :, j] = lasso_select_one_scaled(X_sub, y_sub, lam,
                                                         randomized=randomized, alpha_weak=alpha_weak, rng=rng)
        if (b+1) % 50 == 0:
            print(f"  Completed {b+1}/{B} subsamples")
    sel_probs = selections.mean(axis=0)
    return sel_probs

# -------------------------
# Baseline Lasso using LassoCV (sklearn)
# -------------------------
from sklearn.linear_model import LassoCV

def baseline_lasso_cv(Xs, y, cv=5, n_alphas=50, rng_seed=0):
    # LassoCV uses sklearn alpha definition; it will compute proper alpha_ in sklearn scale.
    rng = np.random.RandomState(rng_seed)
    model = LassoCV(cv=cv, fit_intercept=False, n_alphas=n_alphas, random_state=rng)
    model.fit(Xs, y)
    # model.alpha_ is sklearn alpha; coefficients:
    coefs = model.coef_.copy()
    return coefs, model

# -------------------------
# Evaluate support recovery metrics
# -------------------------
def support_metrics(selected_idx, true_idx, p):
    S = set(true_idx)
    Sel = set(selected_idx)
    tp = len(S & Sel)
    fp = len(Sel - S)
    fn = len(S - Sel)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return {'tp':tp, 'fp':fp, 'fn':fn, 'precision':precision, 'recall':recall, 'f1':f1}

# -------------------------
# Driver: experiment for a single Sigma / parameter point
# -------------------------
def experiment_once(Sigma, true_idx, n=200, B=100, alphas_random=[1.0,0.5,0.2],
                    pi_threshold=0.75, rng_seed=0, do_plot=False):
    rng = np.random.RandomState(rng_seed)
    # 1) population IR
    pop_qmax, pop_qvals, noise_idx = population_ir(Sigma, true_idx)
    # 2) sample dataset
    X = sample_X_from_sigma(n, Sigma, rng=rng)
    # build beta_true for generating y (we assume beta signs positive and magnitudes 1)
    p = Sigma.shape[0]
    beta_true = np.zeros(p)
    for idx in true_idx:
        beta_true[idx] = 1.0
    y = X @ beta_true + 0.5 * rng.randn(n)   # noise sigma=0.5
    # 3) standardize cols
    Xs, norms = standardize_cols(X)
    # 4) lambda grid (abs values)
    lambda_grid, lambda_max = make_lambda_grid(Xs, y, n_lambdas=50)
    lambda_abs = lambda_grid
    # 5) stability selection for each alpha (randomized)
    results = {}
    for alpha_weak in alphas_random:
        randomized = (alpha_weak < 1.0)
        print(f"\nStability selection: alpha_weak={alpha_weak}, randomized={randomized}")
        sel_probs = stability_path(Xs, y, lambda_abs, B=B, randomized=randomized, alpha_weak=alpha_weak, rng_seed=rng_seed)
        # max over lambdas
        max_pi = sel_probs.max(axis=1)  # shape p
        # selected by threshold
        selected_idx = list(np.where(max_pi >= pi_threshold)[0])
        # metrics
        metrics = support_metrics(selected_idx, true_idx, p)
        results[f"stab_alpha_{alpha_weak}"] = {'sel_probs':sel_probs, 'max_pi':max_pi,
                                              'selected_idx':selected_idx, 'metrics':metrics}
        print(f"  max Π (signals): {[max_pi[i] for i in true_idx]}")
        print(f"  max Π (top distractors): {sorted([(i,max_pi[i]) for i in range(p) if i not in true_idx], key=lambda x:-x[1])[:5]}")
        print(f"  support metrics: {metrics}")
    # 6) baseline LassoCV
    coefs, model = baseline_lasso_cv(Xs, y, cv=5, n_alphas=50, rng_seed=rng_seed)
    selected_lasso = list(np.where(coefs != 0)[0])
    metrics_lasso = support_metrics(selected_lasso, true_idx, p)
    print("\nBaseline LassoCV metrics:", metrics_lasso)
    # 7) sample IR (compute sample hat q_k using sample covariance)
    Sigma_sample = np.cov(X, rowvar=False)
    samp_qmax, samp_qvals, _ = population_ir(Sigma_sample, true_idx)
    # package outputs
    out = {
        'pop_qmax': pop_qmax,
        'samp_qmax': samp_qmax,
        'results': results,
        'lasso': {'coefs':coefs, 'selected':selected_lasso, 'metrics':metrics_lasso},
        'X': X, 'y': y, 'Sigma_sample': Sigma_sample, 'lambda_grid': lambda_grid
    }
    # optional small plot of max_pi for signals/distractors
    if do_plot:
        fig, ax = plt.subplots(1,1, figsize=(6,4))
        for idx in true_idx:
            ax.plot(lambda_grid, results[f'stab_alpha_{alphas_random[0]}']['sel_probs'][idx,:], label=f"signal {idx}")
        ax.set_xscale('log'); ax.invert_xaxis(); ax.set_xlabel('λ'); ax.set_ylabel('Π')
        ax.legend(); plt.show()
    return out

# -------------------------
# Example: set up extended Sigma with user-specified numbers
# -------------------------
# USER CONFIG: change these as you like
p = 200
s_true = 7                           # choose 5-10
true_idx = list(range(s_true))
# set unequal variances for the first s_true signals (others default 1.0)
signal_variances = np.ones(p)
vals = [1.2, 1.5, 0.8, 1.1, 0.9, 1.3, 0.75]  # length s_true
for i, idx in enumerate(true_idx):
    signal_variances[idx] = vals[i]
# define distractors: (index, signals_it_relates_to, base tau)
distractor_specs = [
    (30, [0,1,2], 0.6),
    (31, [2,3], 0.5),
    (32, [0,1,2,3,4,5,6], 0.7),
    (33, [4,5], 0.45)
]
rho_withinS = 0.05   # small within-signal correlation baseline

# Build Sigma
Sigma, _ = build_extended_sigma(p=p, true_idx=true_idx, s_true=s_true,
                                signal_variances=signal_variances,
                                rho_withinS=rho_withinS,
                                distractor_specs=distractor_specs,
                                ridge=1e-8, rng=np.random.RandomState(123))

# Quick check: population IR
pop_qmax, pop_qvals, noise_idx = population_ir(Sigma, true_idx)
print("Population IR max (initial config) =", pop_qmax)

# -------------------------
# Run experiment for this single parameter point (fast test)
# -------------------------
t0 = time.time()
res = experiment_once(Sigma, true_idx, n=200, B=100, alphas_random=[1.0, 0.5, 0.2],
                      pi_threshold=0.75, rng_seed=42, do_plot=False)
t1 = time.time()
print(f"\nSingle experiment took {(t1-t0):.1f}s")
# inspect results summary
print("\nPopulation IR max:", res['pop_qmax'])
print("Sample IR max:", res['samp_qmax'])
for k, v in res['results'].items():
    print(k, "metrics:", v['metrics'])
print("LassoCV metrics:", res['lasso']['metrics'])
Population IR max (initial config) = 2.224889814797082

Stability selection: alpha_weak=1.0, randomized=False
  Completed 50/100 subsamples
  Completed 100/100 subsamples
  max Π (signals): [np.float64(0.06), np.float64(0.49), np.float64(0.52), np.float64(0.44), np.float64(0.8), np.float64(0.64), np.float64(0.99)]
  max Π (top distractors): [(30, np.float64(1.0)), (31, np.float64(1.0)), (32, np.float64(1.0)), (33, np.float64(1.0)), (23, np.float64(0.75))]
  support metrics: {'tp': 2, 'fp': 5, 'fn': 5, 'precision': 0.2857142857142857, 'recall': 0.2857142857142857, 'f1': 0.2857142857142857}

Stability selection: alpha_weak=0.5, randomized=True
  Completed 50/100 subsamples
  Completed 100/100 subsamples
  max Π (signals): [np.float64(0.4), np.float64(0.55), np.float64(0.57), np.float64(0.53), np.float64(0.78), np.float64(0.66), np.float64(0.96)]
  max Π (top distractors): [(32, np.float64(1.0)), (31, np.float64(0.99)), (33, np.float64(0.99)), (30, np.float64(0.98)), (163, np.float64(0.75))]
  support metrics: {'tp': 2, 'fp': 5, 'fn': 5, 'precision': 0.2857142857142857, 'recall': 0.2857142857142857, 'f1': 0.2857142857142857}

Stability selection: alpha_weak=0.2, randomized=True
  Completed 50/100 subsamples
  Completed 100/100 subsamples
  max Π (signals): [np.float64(0.63), np.float64(0.67), np.float64(0.68), np.float64(0.66), np.float64(0.73), np.float64(0.72), np.float64(0.8)]
  max Π (top distractors): [(32, np.float64(0.96)), (31, np.float64(0.89)), (30, np.float64(0.88)), (33, np.float64(0.87)), (23, np.float64(0.65))]
  support metrics: {'tp': 1, 'fp': 4, 'fn': 6, 'precision': 0.2, 'recall': 0.14285714285714285, 'f1': 0.16666666666666666}

Baseline LassoCV metrics: {'tp': 3, 'fp': 20, 'fn': 4, 'precision': 0.13043478260869565, 'recall': 0.42857142857142855, 'f1': 0.2}

Single experiment took 22.4s

Population IR max: 2.224889814797082
Sample IR max: 2.2349691845666877
stab_alpha_1.0 metrics: {'tp': 2, 'fp': 5, 'fn': 5, 'precision': 0.2857142857142857, 'recall': 0.2857142857142857, 'f1': 0.2857142857142857}
stab_alpha_0.5 metrics: {'tp': 2, 'fp': 5, 'fn': 5, 'precision': 0.2857142857142857, 'recall': 0.2857142857142857, 'f1': 0.2857142857142857}
stab_alpha_0.2 metrics: {'tp': 1, 'fp': 4, 'fn': 6, 'precision': 0.2, 'recall': 0.14285714285714285, 'f1': 0.16666666666666666}
LassoCV metrics: {'tp': 3, 'fp': 20, 'fn': 4, 'precision': 0.13043478260869565, 'recall': 0.42857142857142855, 'f1': 0.2}
In [ ]:
def plot_extended_figure4(sel_probs_alpha1, sel_probs_alpha05, sel_probs_alpha02,
                           lambda_grid, true_idx, distractor_idx, noise_idx,
                           alpha_values=[1.0, 0.5, 0.2]):
    """
    Produce an extended version of Figure 4:
    - true variables (red solid)
    - correlated distractors (blue dashed)
    - noise variables (gray dotted)
    For alpha ∈ {1.0, 0.5, 0.2}.
    """

    sel_list = [sel_probs_alpha1, sel_probs_alpha05, sel_probs_alpha02]
    labels = ["(a) α = 1.0", "(b) α = 0.5", "(c) α = 0.2"]

    fig, axes = plt.subplots(1, 3, figsize=(18, 5))

    for ax, sel_probs, label in zip(axes, sel_list, labels):

        # ---- TRUE VARIABLES ---------------
        for j in true_idx:
            ax.plot(lambda_grid, sel_probs[j, :],
                    color="darkred", linewidth=2)

        # ---- DISTRACTORS ------------------
        for j in distractor_idx:
            ax.plot(lambda_grid, sel_probs[j, :],
                    color="navy", linestyle="--", linewidth=1.5)

        # ---- NOISE VARIABLES --------------
        for j in noise_idx:
            ax.plot(lambda_grid, sel_probs[j, :],
                    color="gray", linestyle=":", linewidth=0.4, alpha=0.5)

        # ---- Formatting --------------------
        ax.set_xlim(lambda_grid.max(), lambda_grid.min())  # reverse λ axis
        ax.set_ylim(0, 1.02)
        ax.set_xlabel("λ")
        ax.set_ylabel("Π")
        ax.set_title(label)
        ax.grid(False)

    plt.tight_layout()
    plt.show()


plot_extended_figure4(
    sel_probs_alpha1,
    sel_probs_alpha05,
    sel_probs_alpha02,
    lambda_grid,
    true_idx=true_idx,
    distractor_idx=[30,31,32,33],   # or whatever you chose earlier
    noise_idx=[j for j in range(p) if j not in true_idx and j not in [30,31,32,33]]
)
No description has been provided for this image

trying new extension¶

In [ ]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import Lasso
from scipy import linalg
import warnings
warnings.filterwarnings('ignore')

np.random.seed(42)
In [ ]:
def generate_sophisticated_data(n=200, p=200, n_true=5, n_correlated=3, 
                                rho=0.8, within_S_corr=0.0, ridge=0.01):
    """
    Generate data with sophisticated correlation structure.
    DESIGNED TO REPLICATE FIGURE 4 BEHAVIOR: strong true signals, weak correlated signals
    
    Parameters:
    -----------
    n : int
        Number of samples
    p : int
        Number of variables
    n_true : int
        Number of truly important variables (5-10)
    n_correlated : int
        Number of correlated false positives (3-5)
    rho : float
        Correlation strength between true and false variables
    within_S_corr : float
        Small correlation within true variables (0 = independent)
    ridge : float
        Small ridge to ensure numerical stability
    
    Returns:
    --------
    X : np.ndarray (n, p)
        Design matrix
    y : np.ndarray (n,)
        Response vector
    true_indices : np.ndarray
        Indices of true variables
    corr_indices : np.ndarray
        Indices of correlated false variables
    correlation_map : dict
        Maps each correlated variable to which true variables it's correlated with
    """
    
    print("=" * 80)
    print("DATA GENERATION - FIGURE 4 STYLE")
    print("=" * 80)
    
    # Define variable groups
    true_indices = np.arange(n_true)
    corr_indices = np.arange(n_true, n_true + n_correlated)
    noise_indices = np.arange(n_true + n_correlated, p)
    
    print(f"\nVariable groups:")
    print(f"  True variables: {list(true_indices)} (indices 0-{n_true-1})")
    print(f"  Correlated false: {list(corr_indices)} (indices {n_true}-{n_true+n_correlated-1})")
    print(f"  Noise: {len(noise_indices)} variables")
    
    # Start with identity
    Sigma = np.eye(p)
    
    # 1. Unequal variances for true variables (small variation)
    print(f"\n1. Setting variances for true variables...")
    var_range = (0.8, 1.2)  # Smaller range for more stability
    true_variances = np.linspace(var_range[0], var_range[1], n_true)
    for i, idx in enumerate(true_indices):
        Sigma[idx, idx] = true_variances[i]
    print(f"   True variable variances: {true_variances.round(2)}")
    
    # 2. Very small or zero within-S correlation
    if within_S_corr > 0:
        print(f"\n2. Adding small within-true-variables correlation ({within_S_corr})...")
        for i in true_indices:
            for j in true_indices:
                if i != j:
                    Sigma[i, j] = Sigma[j, i] = within_S_corr * np.sqrt(Sigma[i, i] * Sigma[j, j])
    else:
        print(f"\n2. True variables are independent (within_S_corr = 0)")
    
    # 3. Each correlated false variable is correlated with ALL or most true variables
    # This is key to creating the Figure 4 effect!
    print(f"\n3. Creating correlation structure for false positives (ρ={rho})...")
    correlation_map = {}
    
    for corr_idx in corr_indices:
        # Each correlated variable is associated with ALL or most true variables
        # This makes them look important until randomization breaks the symmetry
        if n_true <= 3:
            associated_true = true_indices  # All of them
        else:
            # Pick 3-4 random ones, or all if n_true is small
            n_associated = min(np.random.randint(3, n_true + 1), n_true)
            associated_true = np.random.choice(true_indices, size=n_associated, replace=False)
        
        correlation_map[corr_idx] = associated_true
        
        # Create correlation with these specific true variables
        for true_idx in associated_true:
            Sigma[corr_idx, true_idx] = Sigma[true_idx, corr_idx] = rho * np.sqrt(Sigma[corr_idx, corr_idx] * Sigma[true_idx, true_idx])
        
        print(f"   X{corr_idx} correlated with: {['X' + str(i) for i in associated_true]}")
    
    # 4. Add small ridge for numerical stability
    if ridge > 0:
        print(f"\n4. Adding ridge {ridge} for numerical stability...")
        Sigma += ridge * np.eye(p)
    
    # 5. Verify positive definite
    min_eig = np.min(np.linalg.eigvalsh(Sigma))
    print(f"\n5. Checking covariance matrix:")
    print(f"   Minimum eigenvalue: {min_eig:.6f}")
    if min_eig < 0:
        fix = abs(min_eig) + 0.1
        Sigma += fix * np.eye(p)
        print(f"   Added {fix:.6f} to diagonal for positive definiteness")
    else:
        print(f"   ✓ Positive definite")
    
    # 6. Generate X from multivariate normal
    print(f"\n6. Generating design matrix X ~ N(0, Σ)...")
    X = np.random.multivariate_normal(np.zeros(p), Sigma, size=n)
    
    # DON'T standardize here - do it after to preserve correlation structure
    # Just center
    X = X - X.mean(axis=0)
    
    # 7. Generate response with STRONG signal
    print(f"\n7. Generating response y = Xβ + ε...")
    beta_true = np.zeros(p)
    
    # STRONG coefficients for true variables (like in paper: β = 1.0)
    beta_true[true_indices] = 1.0  # All equal and strong
    
    print(f"   True coefficients β:")
    for idx in true_indices:
        print(f"     β{idx} = {beta_true[idx]:.3f}")
    
    # SMALL noise (high SNR like in paper: σ² = 0.25, so σ = 0.5)
    noise_std = 0.25  # Smaller noise for stronger signal
    y = X @ beta_true + noise_std * np.random.randn(n)
    
    signal_power = np.var(X @ beta_true)
    noise_power = noise_std ** 2
    snr = signal_power / noise_power
    
    print(f"   Noise std: {noise_std}")
    print(f"   Signal-to-noise ratio: {snr:.2f}")
    
    # Verify empirical correlations
    print(f"\n8. Verifying empirical correlations:")
    print(f"   Sample correlations (should be ≈ {rho}):")
    for corr_idx in corr_indices:
        associated = correlation_map[corr_idx]
        empirical_corrs = [np.corrcoef(X[:, corr_idx], X[:, true_idx])[0, 1] for true_idx in associated]
        print(f"     X{corr_idx}: {np.mean(empirical_corrs):.3f} (mean with its associated true vars)")
    
    print("\n" + "=" * 80)
    
    return X, y, true_indices, corr_indices, correlation_map


#%%
def stability_selection_with_randomization(X, y, n_bootstrap=100, lambda_grid=None, alpha_weak=0.2):
    """
    Run stability selection with randomized LASSO.
    """
    rng = np.random.RandomState(rng_seed)
    n, p = X.shape
    subsample_size = n // 2
    
    if lambda_grid is None:
        lambda_max = np.max(np.abs(X.T @ y)) / n
        lambda_min = lambda_max * 0.01
        lambda_grid = np.logspace(np.log10(lambda_min), np.log10(lambda_max), 50)
    
    n_lambdas = len(lambda_grid)
    selections = np.zeros((n_bootstrap, p, n_lambdas))
    
    print(f"\nRunning stability selection:")
    print(f"  Bootstrap samples: {n_bootstrap}")
    print(f"  Subsample size: {subsample_size}")
    print(f"  Lambda grid: {n_lambdas} values")
    print(f"  Randomization: α = {alpha_weak}")
    
    for b in range(n_bootstrap):
        # Subsample
        subsample_idx = np.random.choice(n, size=subsample_size, replace=False)
        X_sub = X[subsample_idx, :]
        y_sub = y[subsample_idx]
        
        # Random weights
        weights = np.random.uniform(alpha_weak, 1.0, p)
        X_weighted = X_sub / weights
        
        # Fit LASSO for each lambda
        for l_idx, lam in enumerate(lambda_grid):
            lasso = Lasso(alpha=lam, max_iter=5000, tol=1e-4)
            lasso.fit(X_weighted, y_sub)
            selections[b, :, l_idx] = (lasso.coef_ != 0).astype(int)
        
        if (b + 1) % 20 == 0:
            print(f"  Progress: {b + 1}/{n_bootstrap}")
    
    # Compute selection probabilities
    selection_probs = selections.mean(axis=0)
    
    return selection_probs, lambda_grid


#%%
def plot_figure4_style(selection_probs, lambda_grid, true_indices, corr_indices, 
                       correlation_map, n_true, alpha_value, title_suffix=""):
    """
    Create Figure 4 style plot showing stability paths.
    """
    
    # Normalize lambda for x-axis (reversed like in paper)
    lambda_normalized = 1 - (lambda_grid - lambda_grid.min()) / (lambda_grid.max() - lambda_grid.min())
    
    fig, ax = plt.subplots(figsize=(10, 6))
    
    # 1. Plot TRUE variables (RED SOLID)
    print(f"\nPlotting true variables (red solid):")
    for idx in true_indices:
        max_prob = selection_probs[idx, :].max()
        ax.plot(lambda_normalized, selection_probs[idx, :], 'r-', 
                linewidth=2.5, alpha=0.8)
        print(f"  X{idx}: max Π = {max_prob:.3f}")
    
    # Add one to legend
    ax.plot([], [], 'r-', linewidth=2.5, alpha=0.8, 
            label=f'True variables (X0-X{n_true-1})')
    
    # 2. Plot CORRELATED FALSE variables (BLUE DASHED)
    print(f"\nPlotting correlated false positives (blue dashed):")
    for idx in corr_indices:
        max_prob = selection_probs[idx, :].max()
        associated = correlation_map[idx]
        ax.plot(lambda_normalized, selection_probs[idx, :], 'b--', 
                linewidth=2.5, alpha=0.8)
        print(f"  X{idx} (corr. with {list(associated)}): max Π = {max_prob:.3f}")
    
    # Add one to legend
    ax.plot([], [], 'b--', linewidth=2.5, alpha=0.8,
            label=f'Correlated false (X{n_true}-X{n_true+len(corr_indices)-1})')
    
    # 3. Plot NOISE variables (GRAY)
    print(f"\nPlotting noise variables (gray):")
    noise_start = n_true + len(corr_indices)
    n_noise_to_plot = min(50, selection_probs.shape[0] - noise_start)
    noise_indices_plot = range(noise_start, noise_start + n_noise_to_plot)
    
    max_noise_prob = 0
    for idx in noise_indices_plot:
        max_prob = selection_probs[idx, :].max()
        max_noise_prob = max(max_noise_prob, max_prob)
        ax.plot(lambda_normalized, selection_probs[idx, :], 'gray', 
                linewidth=0.3, alpha=0.3)
    
    print(f"  Plotted {n_noise_to_plot} noise variables")
    print(f"  Max probability among noise: {max_noise_prob:.3f}")
    
    # Add one to legend
    ax.plot([], [], 'gray', linewidth=0.3, alpha=0.3,
            label=f'Noise variables (X{noise_start}+)')
    
    # Formatting
    ax.set_xlabel('λ', fontsize=14, fontweight='bold')
    ax.set_ylabel('Π (Selection Probability)', fontsize=14, fontweight='bold')
    ax.set_title(f'Stability Paths - Randomized LASSO (α={alpha_value}){title_suffix}', 
                 fontsize=15, fontweight='bold')
    ax.set_xlim([1.0, 0.0])
    ax.set_ylim([0, 1.05])
    ax.legend(fontsize=11, loc='upper left')
    ax.grid(True, alpha=0.3)
    
    return fig


#%%
def analyze_results(selection_probs, true_indices, corr_indices, threshold=0.85):
    """
    Analyze which variables would be selected at different thresholds.
    """
    
    print("\n" + "=" * 80)
    print("ANALYSIS OF RESULTS")
    print("=" * 80)
    
    # Get max probabilities
    max_probs = selection_probs.max(axis=1)
    
    # Analyze at threshold
    selected = np.where(max_probs >= threshold)[0]
    
    print(f"\nWith threshold π_thr = {threshold}:")
    print(f"  Total variables selected: {len(selected)}")
    
    # True positives
    tp = set(selected) & set(true_indices)
    print(f"\n  True Positives: {len(tp)}/{len(true_indices)}")
    if len(tp) > 0:
        print(f"    Selected: {list(tp)}")
    fn = set(true_indices) - set(selected)
    if len(fn) > 0:
        print(f"    Missed: {list(fn)}")
    
    # False positives from correlated
    fp_corr = set(selected) & set(corr_indices)
    print(f"\n  False Positives (correlated): {len(fp_corr)}/{len(corr_indices)}")
    if len(fp_corr) > 0:
        print(f"    Selected: {list(fp_corr)}")
    
    # False positives from noise
    noise_start = max(corr_indices) + 1
    fp_noise = [x for x in selected if x >= noise_start]
    print(f"\n  False Positives (noise): {len(fp_noise)}")
    if len(fp_noise) > 0:
        print(f"    Selected: {fp_noise[:5]}{'...' if len(fp_noise) > 5 else ''}")
    
    # Summary metrics
    tpr = len(tp) / len(true_indices)
    fpr = (len(fp_corr) + len(fp_noise)) / (selection_probs.shape[0] - len(true_indices))
    
    print(f"\n  Performance Metrics:")
    print(f"    True Positive Rate: {tpr:.3f}")
    print(f"    False Positive Rate: {fpr:.4f}")
    
    # Key question: Can we separate?
    print(f"\n  Separation Analysis:")
    min_true_prob = min([max_probs[i] for i in true_indices])
    max_corr_prob = max([max_probs[i] for i in corr_indices]) if len(corr_indices) > 0 else 0
    gap = min_true_prob - max_corr_prob
    
    print(f"    Min probability (true vars): {min_true_prob:.3f}")
    print(f"    Max probability (corr false): {max_corr_prob:.3f}")
    print(f"    Gap: {gap:.3f}")
    
    if gap > 0.15:
        print(f"    ✓ Good separation! Can distinguish true from false.")
    elif gap > 0.05:
        print(f"    ~ Moderate separation. Careful threshold selection needed.")
    else:
        print(f"    ✗ Poor separation. Hard to distinguish.")
    
    return {
        'selected': selected,
        'TP': len(tp),
        'FP_corr': len(fp_corr),
        'FP_noise': len(fp_noise),
        'TPR': tpr,
        'FPR': fpr,
        'gap': gap
    }


#%% [markdown]
# # Main Execution

#%%
print("\n" + "=" * 80)
print("SOPHISTICATED DATA GENERATION AND ANALYSIS")
print("=" * 80)

# Generate data with settings like Figure 4
X, y, true_indices, corr_indices, correlation_map = generate_sophisticated_data(
    n=200,
    p=200,
    n_true=5,             # Start with fewer for clearer separation
    n_correlated=3,       # Fewer correlated false positives
    rho=0.4,             # High correlation to test (paper uses 0.8)
    within_S_corr=0.0,   # Keep true variables independent
    ridge=0.01           # Numerical stability
)

# Run stability selection with randomization
print("\n" + "=" * 80)
print("RUNNING STABILITY SELECTION")
print("=" * 80)

selection_probs, lambda_grid = stability_selection_with_randomization(
    X, y, 
    n_bootstrap=100,
    alpha_weak=0.2  # Strong randomization
)

print("\n✓ Stability selection complete!")

# Create Figure 4 style plot
print("\n" + "=" * 80)
print("CREATING VISUALIZATION")
print("=" * 80)

fig = plot_figure4_style(
    selection_probs, 
    lambda_grid, 
    true_indices, 
    corr_indices,
    correlation_map,
    len(true_indices),
    alpha_value=0.2,
    title_suffix=" (Sophisticated Structure)"
)

plt.tight_layout()
plt.savefig('sophisticated_stability_paths.png', dpi=300, bbox_inches='tight')
print("\n✓ Figure saved as 'sophisticated_stability_paths.png'")
plt.show()

# Analyze results
results = analyze_results(
    selection_probs, 
    true_indices, 
    corr_indices,
    threshold=0.85
)

print("\n" + "=" * 80)
print("CONCLUSION")
print("=" * 80)
print("""
Expected behavior:
✓ Red lines (true variables) should reach ~1.0
✓ Blue dashed lines (correlated false) should drop below red lines
✓ Gray lines (noise) should stay low

If blue lines are still high:
- Try increasing n_bootstrap to 150
- Try lower alpha_weak (0.1 instead of 0.2)
- Correlation structure might be very challenging

If red lines don't reach 1.0:
- Some true variables might have weak signals
- Increase their coefficients in data generation
- Or accept that weak signals are hard to detect
""")

#%% [markdown]
# ## Optional: Compare Multiple α Values

#%%
print("\n" + "=" * 80)
print("OPTIONAL: COMPARING DIFFERENT α VALUES")
print("=" * 80)

alpha_values = [1.0, 0.5, 0.2]
fig_compare, axes = plt.subplots(1, 3, figsize=(18, 5))

for idx, alpha in enumerate(alpha_values):
    print(f"\nRunning with α = {alpha}...")
    
    sel_probs_alpha, _ = stability_selection_with_randomization(
        X, y, 
        n_bootstrap=100,
        lambda_grid=lambda_grid,
        alpha_weak=alpha
    )
    
    # Plot on subplot
    ax = axes[idx]
    lambda_normalized = 1 - (lambda_grid - lambda_grid.min()) / (lambda_grid.max() - lambda_grid.min())
    
    # True variables (red)
    for tidx in true_indices:
        ax.plot(lambda_normalized, sel_probs_alpha[tidx, :], 'r-', linewidth=2, alpha=0.8)
    
    # Correlated false (blue dashed)
    for cidx in corr_indices:
        ax.plot(lambda_normalized, sel_probs_alpha[cidx, :], 'b--', linewidth=2, alpha=0.8)
    
    # Noise (gray)
    for nidx in range(max(corr_indices)+1, min(max(corr_indices)+50, sel_probs_alpha.shape[0])):
        ax.plot(lambda_normalized, sel_probs_alpha[nidx, :], 'gray', linewidth=0.3, alpha=0.3)
    
    ax.plot([], [], 'r-', linewidth=2, label='True')
    ax.plot([], [], 'b--', linewidth=2, label='Correlated false')
    ax.plot([], [], 'gray', linewidth=0.3, label='Noise')
    
    ax.set_xlabel('λ', fontsize=12, fontweight='bold')
    ax.set_ylabel('Π', fontsize=12, fontweight='bold')
    ax.set_title(f'({"abc"[idx]}) α = {alpha}', fontsize=13, fontweight='bold')
    ax.set_xlim([1.0, 0.0])
    ax.set_ylim([0, 1.05])
    ax.legend(fontsize=9)
    ax.grid(True, alpha=0.3)

plt.suptitle('Comparison: Effect of Randomization Strength', fontsize=15, fontweight='bold', y=1.02)
plt.tight_layout()
plt.savefig('comparison_alpha_values.png', dpi=300, bbox_inches='tight')
print("\n✓ Comparison figure saved as 'comparison_alpha_values.png'")
plt.show()

print("\n" + "=" * 80)
print("ALL ANALYSES COMPLETE!")
print("=" * 80)
================================================================================
SOPHISTICATED DATA GENERATION AND ANALYSIS
================================================================================
================================================================================
DATA GENERATION - FIGURE 4 STYLE
================================================================================

Variable groups:
  True variables: [np.int64(0), np.int64(1), np.int64(2), np.int64(3), np.int64(4)] (indices 0-4)
  Correlated false: [np.int64(5), np.int64(6), np.int64(7)] (indices 5-7)
  Noise: 192 variables

1. Setting variances for true variables...
   True variable variances: [0.8 0.9 1.  1.1 1.2]

2. True variables are independent (within_S_corr = 0)

3. Creating correlation structure for false positives (ρ=0.4)...
   X5 correlated with: ['X1', 'X0', 'X3', 'X2']
   X6 correlated with: ['X0', 'X4', 'X2', 'X1']
   X7 correlated with: ['X2', 'X4', 'X3', 'X1']

4. Adding ridge 0.01 for numerical stability...

5. Checking covariance matrix:
   Minimum eigenvalue: -0.251158
   Added 0.351158 to diagonal for positive definiteness

6. Generating design matrix X ~ N(0, Σ)...

7. Generating response y = Xβ + ε...
   True coefficients β:
     β0 = 1.000
     β1 = 1.000
     β2 = 1.000
     β3 = 1.000
     β4 = 1.000
   Noise std: 0.25
   Signal-to-noise ratio: 111.54

8. Verifying empirical correlations:
   Sample correlations (should be ≈ 0.4):
     X5: 0.318 (mean with its associated true vars)
     X6: 0.234 (mean with its associated true vars)
     X7: 0.249 (mean with its associated true vars)

================================================================================

================================================================================
RUNNING STABILITY SELECTION
================================================================================

Running stability selection:
  Bootstrap samples: 100
  Subsample size: 100
  Lambda grid: 50 values
  Randomization: α = 0.2
  Progress: 20/100
  Progress: 40/100
  Progress: 60/100
  Progress: 80/100
  Progress: 100/100

✓ Stability selection complete!

================================================================================
CREATING VISUALIZATION
================================================================================

Plotting true variables (red solid):
  X0: max Π = 1.000
  X1: max Π = 1.000
  X2: max Π = 1.000
  X3: max Π = 1.000
  X4: max Π = 1.000

Plotting correlated false positives (blue dashed):
  X5 (corr. with [np.int64(1), np.int64(0), np.int64(3), np.int64(2)]): max Π = 0.840
  X6 (corr. with [np.int64(0), np.int64(4), np.int64(2), np.int64(1)]): max Π = 0.870
  X7 (corr. with [np.int64(2), np.int64(4), np.int64(3), np.int64(1)]): max Π = 0.790

Plotting noise variables (gray):
  Plotted 50 noise variables
  Max probability among noise: 0.640

✓ Figure saved as 'sophisticated_stability_paths.png'
No description has been provided for this image
================================================================================
ANALYSIS OF RESULTS
================================================================================

With threshold π_thr = 0.85:
  Total variables selected: 7

  True Positives: 5/5
    Selected: [np.int64(0), np.int64(1), np.int64(2), np.int64(3), np.int64(4)]

  False Positives (correlated): 1/3
    Selected: [np.int64(6)]

  False Positives (noise): 1
    Selected: [np.int64(163)]

  Performance Metrics:
    True Positive Rate: 1.000
    False Positive Rate: 0.0103

  Separation Analysis:
    Min probability (true vars): 1.000
    Max probability (corr false): 0.870
    Gap: 0.130
    ~ Moderate separation. Careful threshold selection needed.

================================================================================
CONCLUSION
================================================================================

Expected behavior:
✓ Red lines (true variables) should reach ~1.0
✓ Blue dashed lines (correlated false) should drop below red lines
✓ Gray lines (noise) should stay low

If blue lines are still high:
- Try increasing n_bootstrap to 150
- Try lower alpha_weak (0.1 instead of 0.2)
- Correlation structure might be very challenging

If red lines don't reach 1.0:
- Some true variables might have weak signals
- Increase their coefficients in data generation
- Or accept that weak signals are hard to detect


================================================================================
OPTIONAL: COMPARING DIFFERENT α VALUES
================================================================================

Running with α = 1.0...

Running stability selection:
  Bootstrap samples: 100
  Subsample size: 100
  Lambda grid: 50 values
  Randomization: α = 1.0
  Progress: 20/100
  Progress: 40/100
  Progress: 60/100
  Progress: 80/100
  Progress: 100/100

Running with α = 0.5...

Running stability selection:
  Bootstrap samples: 100
  Subsample size: 100
  Lambda grid: 50 values
  Randomization: α = 0.5
  Progress: 20/100
  Progress: 40/100
  Progress: 60/100
  Progress: 80/100
  Progress: 100/100

Running with α = 0.2...

Running stability selection:
  Bootstrap samples: 100
  Subsample size: 100
  Lambda grid: 50 values
  Randomization: α = 0.2
  Progress: 20/100
  Progress: 40/100
  Progress: 60/100
  Progress: 80/100
  Progress: 100/100

✓ Comparison figure saved as 'comparison_alpha_values.png'
No description has been provided for this image
================================================================================
ALL ANALYSES COMPLETE!
================================================================================
In [ ]:
import numpy as np
from sklearn.linear_model import Lasso
import matplotlib.pyplot as plt

# ============================================================
# 1. DATA: Extended Figure-4 style correlated structure
# ============================================================
def generate_figure4_correct(n=200, p=200,
                             n_true=6,
                             n_corr=3,
                             rho=0.8,
                             noise_std=0.25,
                             seed=0):

    rng = np.random.RandomState(seed)

    true_idx = np.arange(n_true)
    corr_idx = np.arange(n_true, n_true + n_corr)
    noise_idx = np.arange(n_true + n_corr, p)

    Sigma = np.eye(p)

    # 1. True variables = independent, variance 1
    for i in true_idx:
        Sigma[i, i] = 1.0

    # 2. Correlated false variables: each correlated with EXACTLY ONE true variable
    # THIS is the key fix.
    for j, cj in enumerate(corr_idx):
        t = true_idx[j % n_true]  # one true parent variable
        Sigma[cj, t] = Sigma[t, cj] = rho
        Sigma[cj, cj] = 1.0

    # 3. Noise
    for j in noise_idx:
        Sigma[j, j] = 1.0

    # Ensure PD
    eigvals = np.linalg.eigvalsh(Sigma)
    if eigvals.min() < 1e-6:
        Sigma += (1e-6 - eigvals.min()) * np.eye(p)

    # Generate X
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    X -= X.mean(axis=0)

    # Strong true coefficients
    beta = np.zeros(p)
    beta[true_idx] = 1.0

    # y = Xβ + ε
    y = X @ beta + noise_std * rng.randn(n)

    return X, y, true_idx, corr_idx, noise_idx


# ============================================================
# 2. Stability Selection (correct implementation)
# ============================================================

def stability_selection_randomized(X, y, lambda_grid=None,
                                   alpha_weak=0.5,
                                   B=100,
                                   seed=0):

    rng = np.random.RandomState(seed)
    n, p = X.shape
    m = n // 2

    # Standardize X columns (unit ℓ2 norm)
    Xs = X / np.linalg.norm(X, axis=0, keepdims=True)

    # Build lambda grid if needed
    if lambda_grid is None:
        lam_max = np.max(np.abs(Xs.T @ y))     # correct λ_max
        lambda_grid = np.logspace(np.log10(lam_max),
                                  np.log10(lam_max * 0.01),
                                  50)

    L = len(lambda_grid)
    select = np.zeros((B, p, L))

    for b in range(B):
        idx = rng.choice(n, m, replace=False)
        X_sub = Xs[idx, :]
        y_sub = y[idx]

        # Bernoulli weights in {1, α}
        W = np.where(rng.rand(p) < 0.5, 1.0, alpha_weak)
        Xw = X_sub / W

        for j, lam in enumerate(lambda_grid):
            # Convert to sklearn alpha (objective is 1/(2m)||·||^2 + α||β||₁)
            alpha_sklearn = lam / (2 * m)

            model = Lasso(alpha=alpha_sklearn,
                          fit_intercept=False,
                          max_iter=8000,
                          tol=1e-5)
            model.fit(Xw, y_sub)

            select[b, :, j] = (model.coef_ != 0).astype(float)

    return select.mean(axis=0), lambda_grid


# ============================================================
# 3. Plot (matches Figure 4 style)
# ============================================================

def plot_paths(selection_probs, lambda_grid, true_idx, corr_idx, noise_idx,
               alpha_value, ax):

    # Normalize λ to go from 1 → 0 (paper’s style)
    lam_norm = 1 - (lambda_grid - lambda_grid.min()) / (lambda_grid.max() - lambda_grid.min())

    # TRUE variables — RED
    for k in true_idx:
        ax.plot(lam_norm, selection_probs[k, :],
                color='darkred', linewidth=2)

    # CORRELATED — BLUE dashed
    for k in corr_idx:
        ax.plot(lam_norm, selection_probs[k, :],
                color='navy', linestyle='--', linewidth=2)

    # NOISE — many thin gray lines
    for k in noise_idx[:80]:        # plot subset to avoid clutter
        ax.plot(lam_norm, selection_probs[k, :],
                color='gray', linewidth=0.4, alpha=0.3)

    ax.set_xlim(1, 0)
    ax.set_ylim(0, 1)
    ax.set_xlabel("λ")
    ax.set_ylabel("Π")
    ax.set_title(f"α = {alpha_value}")



# ============================================================
# 4. FULL RUN (copy/paste this block)
# ============================================================

# ---- Data ----
X, y, true_idx, corr_idx, noise_idx = generate_extended_figure4(
    n=200,
    p=200,
    n_true=6,
    n_corr=3,
    rho=0.8,      # HIGH correlation → correlated variables look "important"
    noise_std=0.25,
    seed=1
)

# ---- α = 1.0, 0.5, 0.2 ----
sel_a1, lam_grid = stability_selection_randomized(X, y, alpha_weak=1.0, B=100)
sel_a05, _       = stability_selection_randomized(X, y, lambda_grid=lam_grid, alpha_weak=0.5, B=100)
sel_a02, _       = stability_selection_randomized(X, y, lambda_grid=lam_grid, alpha_weak=0.2, B=100)

# ---- Plot ----
fig, axes = plt.subplots(1, 3, figsize=(18, 5))

plot_paths(sel_a1,  lam_grid, true_idx, corr_idx, noise_idx, 1.0, axes[0])
axes[0].set_title("(a) α = 1.0")

plot_paths(sel_a05, lam_grid, true_idx, corr_idx, noise_idx, 0.5, axes[1])
axes[1].set_title("(b) α = 0.5")

plot_paths(sel_a02, lam_grid, true_idx, corr_idx, noise_idx, 0.2, axes[2])
axes[2].set_title("(c) α = 0.2")

plt.tight_layout()
plt.show()
No description has been provided for this image
In [ ]:
def generate_figure4_data(n=200, p=200, n_true=6, n_corr=3, rho=0.8, noise_std=0.25):

    rng = np.random.RandomState(0)

    true_idx = np.arange(n_true)
    corr_idx = np.arange(n_true, n_true + n_corr)
    noise_idx = np.arange(n_true + n_corr, p)

    # Covariance matrix
    Sigma = np.eye(p)

    # TRUE variables: independent, var=1
    # CORRELATED variables: correlated with exactly ONE true variable
    for k, j in enumerate(corr_idx):
        t = true_idx[k % n_true]
        Sigma[j, t] = Sigma[t, j] = rho   # only one parent

    # Force positive definite
    w, v = np.linalg.eigh(Sigma)
    Sigma = v @ np.diag(np.maximum(w, 1e-6)) @ v.T

    # Generate X
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    X -= X.mean(axis=0)

    # True coefficients strong
    beta = np.zeros(p)
    beta[true_idx] = 1.0

    # y = Xβ + noise
    y = X @ beta + noise_std * rng.randn(n)

    return X, y, true_idx, corr_idx, noise_idx
In [ ]:
def make_lambda_grid(X, y, n_lambdas=50):
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    n = X.shape[0]
    lambda_max = np.max(np.abs(Xs.T @ y)) / n
    lambda_min = lambda_max * 0.001
    lambda_grid = np.logspace(np.log10(lambda_max), np.log10(lambda_min), n_lambdas)
    return lambda_grid, Xs
In [ ]:
from sklearn.linear_model import Lasso

def randomized_lasso_path(X, y, lambda_grid, n_boot=100, alpha_weak=0.5):

    n, p = X.shape
    subsample = n // 2

    selections = np.zeros((n_boot, p, len(lambda_grid)))

    for b in range(n_boot):

        idx = np.random.choice(n, subsample, replace=False)
        Xs, ys = X[idx], y[idx]

        # Bernoulli weights: exactly what M&B use
        w = np.where(np.random.rand(p) < alpha_weak, alpha_weak, 1.0)
        Xw = Xs / w

        for li, lam in enumerate(lambda_grid):
            model = Lasso(alpha=lam, max_iter=5000, fit_intercept=False)
            model.fit(Xw, ys)
            selections[b, :, li] = (model.coef_ != 0).astype(int)

    return selections.mean(0)
In [ ]:
def plot_stability(sel, lambda_grid, true_idx, corr_idx):

    # ---- λ ordering (DESCENDING: left = λ_max) ----
    lambda_order = np.argsort(-lambda_grid)
    lambda_plot = lambda_grid[lambda_order]
    sel_plot = sel[:, lambda_order]

    # ---- noise variable indices ----
    noise_idx = [i for i in range(sel.shape[0]) 
                 if i not in true_idx and i not in corr_idx]

    # ---- figure ----
    fig, ax = plt.subplots(1, 1, figsize=(10, 6))

    # True vars (red solid)
    for idx in true_idx:
        ax.plot(lambda_plot, sel_plot[idx], 'r-', linewidth=2.5, alpha=0.85)

    # Correlated false vars (blue dashed)
    for idx in corr_idx:
        ax.plot(lambda_plot, sel_plot[idx], 'b--', linewidth=2.5, alpha=0.85)

    # Noise vars (gray thin)
    for idx in noise_idx:
        ax.plot(lambda_plot, sel_plot[idx], color='gray', 
                linewidth=0.3, alpha=0.25)

    # ---- axis formatting ----
    ax.set_xlim([lambda_plot[0], lambda_plot[-1]])   # descending λ
    ax.set_ylim([0, 1])

    ax.set_xlabel("λ", fontsize=14)
    ax.set_ylabel("Π", fontsize=14)
    ax.set_title("Stability Path", fontsize=16)

    plt.show()
In [ ]:
X, y, T, C, N = generate_figure4_data()

lam = make_lambda_grid(X, y)

sel_1 = randomized_lasso_path(X, y, lam, alpha_weak=1.0)
sel_05 = randomized_lasso_path(X, y, lam, alpha_weak=0.5)
sel_02 = randomized_lasso_path(X, y, lam, alpha_weak=0.2)
In [ ]:
def plot_three_panels(sel1, sel05, sel02, lambda_grid, true_idx, corr_idx):
    """
    Create Figure 4-style plot with three side-by-side panels for α = 1, 0.5, 0.2.
    """

    lam_norm = 1 - (lambda_grid - lambda_grid.min()) / (lambda_grid.max() - lambda_grid.min())

    fig, axes = plt.subplots(1, 3, figsize=(18, 5))

    sels = [sel1, sel05, sel02]
    alphas = [1.0, 0.5, 0.2]
    labels = ['(a) α = 1.0', '(b) α = 0.5', '(c) α = 0.2']

    for ax, sel, a, lab in zip(axes, sels, alphas, labels):

        # === True variables (red) ===
        for j in true_idx:
            ax.plot(lam_norm, sel[j], color='darkred', lw=2)

        # === Correlated false positives (blue dashed) ===
        for j in corr_idx:
            ax.plot(lam_norm, sel[j], color='navy', lw=2, ls='--')

        # === Noise (gray thin) ===
        p = sel.shape[0]
        for j in range(p):
            if j not in true_idx and j not in corr_idx:
                ax.plot(lam_norm, sel[j], color='gray', lw=0.4, alpha=0.25)

        # === Formatting ===
        ax.set_xlim(1, 0)
        ax.set_ylim(0, 1.05)
        ax.set_xlabel("λ", fontsize=12)
        ax.set_title(lab, fontsize=14)
        ax.grid(False)

    axes[0].set_ylabel("Π", fontsize=12)
    plt.tight_layout()
    plt.show()
In [ ]:
plot_three_panels(sel_1, sel_05, sel_02, lam, T, C)
No description has been provided for this image
In [ ]:
print("λ_grid (raw):", lambda_grid[:10])
print("min λ:", lambda_grid.min())
print("max λ:", lambda_grid.max())

largest_lambda_idx = np.argmax(lambda_grid)
print("Largest λ:", lambda_grid[largest_lambda_idx])
print("Selection at largest λ:", sel[:, largest_lambda_idx].sum())
λ_grid (raw): [0.01858624 0.02041775 0.02242973 0.02463999 0.02706804 0.02973535
 0.03266551 0.03588441 0.0394205  0.04330504]
min λ: 0.018586237372516037
max λ: 1.858623737251604
Largest λ: 1.858623737251604
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Cell In[50], line 7
      5 largest_lambda_idx = np.argmax(lambda_grid)
      6 print("Largest λ:", lambda_grid[largest_lambda_idx])
----> 7 print("Selection at largest λ:", sel[:, largest_lambda_idx].sum())

NameError: name 'sel' is not defined
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso


# =========================================================
# 1. DATA GENERATION (EXTENDED FIGURE 4 VERSION)
# =========================================================
def generate_extended_data(n=200, p=200, n_true=6, n_corr=3, rho=0.8, noise_std=0.25):
    """
    Extended version of Figure 4:
    - n_true strong variables (all β = 1)
    - n_corr correlated false positives, each correlated with 1 true variable
    - remaining variables are pure noise
    """

    rng = np.random.RandomState(0)

    true_idx = np.arange(n_true)
    corr_idx = np.arange(n_true, n_true + n_corr)
    noise_idx = np.arange(n_true + n_corr, p)

    # --- Build covariance matrix ---
    Sigma = np.eye(p)

    # For each correlated variable j, correlate with ONE true variable t
    for k, j in enumerate(corr_idx):
        t = true_idx[k % n_true]
        Sigma[j, t] = Sigma[t, j] = rho

    # Force positive definitive
    w, v = np.linalg.eigh(Sigma)
    Sigma = v @ np.diag(np.maximum(w, 1e-8)) @ v.T

    # --- Generate X ---
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    X -= X.mean(axis=0)

    # --- True coefficients ---
    beta = np.zeros(p)
    beta[true_idx] = 1.0

    # --- Generate y ---
    y = X @ beta + noise_std * rng.randn(n)

    return X, y, true_idx, corr_idx, noise_idx



# =========================================================
# 2. LAMBDA GRID (CORRECT)
# =========================================================
def make_lambda_grid(X, y, n_lambdas=50):
    """
    Computes λ_max correctly on standardized X.
    Outputs a decreasing λ-grid as needed for stability paths.
    """

    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    n = X.shape[0]

    lambda_max = np.max(np.abs(Xs.T @ y)) / n
    lambda_min = lambda_max * 0.001

    lambda_grid = np.logspace(np.log10(lambda_max),
                              np.log10(lambda_min),
                              n_lambdas)

    return lambda_grid, Xs



# =========================================================
# 3. RANDOMIZED LASSO STABILITY SELECTION
# =========================================================
def randomized_lasso_path(X, y, lambda_grid, n_boot=100, alpha_weak=1.0):

    n, p = X.shape
    subsample = n // 2

    selections = np.zeros((n_boot, p, len(lambda_grid)))

    for b in range(n_boot):

        idx = np.random.choice(n, subsample, replace=False)
        Xs, ys = X[idx], y[idx]

        # Bernoulli random weights EXACTLY like Meinshausen & Bühlmann
        w = np.where(np.random.rand(p) < alpha_weak, alpha_weak, 1.0)
        Xw = Xs / w

        for li, lam in enumerate(lambda_grid):

            alpha_sklearn = lam / (2 * subsample)   # correct scaling!
            model = Lasso(alpha=alpha_sklearn, fit_intercept=False, max_iter=5000)
            model.fit(Xw, ys)

            selections[b, :, li] = (model.coef_ != 0).astype(int)

    return selections.mean(0)



# =========================================================
# 4. THREE-PANEL FIGURE (MATCHES FIGURE 4 STYLE)
# =========================================================
def plot_three_panels(sel1, sel05, sel02, lambda_grid, true_idx, corr_idx):

    # reverse λ-axis so left = λ_max
    lam_norm = 1 - (lambda_grid - lambda_grid.min()) / (lambda_grid.max() - lambda_grid.min())

    fig, axes = plt.subplots(1, 3, figsize=(18, 5))
    sels = [sel1, sel05, sel02]
    alphas = [1.0, 0.5, 0.2]
    labels = ['(a) α = 1.0', '(b) α = 0.5', '(c) α = 0.2']

    for ax, sel, a, lab in zip(axes, sels, alphas, labels):

        # True variables (red)
        for j in true_idx:
            ax.plot(lam_norm, sel[j], color='darkred', lw=2)

        # Correlated false positives (blue dashed)
        for j in corr_idx:
            ax.plot(lam_norm, sel[j], color='navy', lw=2, ls='--')

        # Noise variables (gray)
        p = sel.shape[0]
        for j in range(p):
            if j not in true_idx and j not in corr_idx:
                ax.plot(lam_norm, sel[j], color='gray', lw=0.3, alpha=0.25)

        ax.set_xlim(1, 0)   # reverse axis
        ax.set_ylim(0, 1.05)
        ax.set_xlabel("λ", fontsize=12)
        ax.set_title(lab, fontsize=14)
        ax.grid(False)

    axes[0].set_ylabel("Π", fontsize=12)
    plt.tight_layout()
    plt.show()



# =========================================================
# 5. MAIN SCRIPT
# =========================================================
# Step A: Generate extended data
X, y, T, C, N = generate_extended_data(
    n_true=6,     # 6 true variables
    n_corr=3,     # 3 correlated false positives
    rho=0.8       # high correlation > 0.5
)

# Step B: Generate λ-grid
lambda_grid, X_std = make_lambda_grid(X, y)
y_centered = y - y.mean()

# Step C: Run stability selection paths
sel_1  = randomized_lasso_path(X_std, y_centered, lambda_grid, alpha_weak=1.0)
sel_05 = randomized_lasso_path(X_std, y_centered, lambda_grid, alpha_weak=0.5)
sel_02 = randomized_lasso_path(X_std, y_centered, lambda_grid, alpha_weak=0.2)

# Step D: Plot the 3-panel Figure 4 style result
plot_three_panels(sel_1, sel_05, sel_02, lambda_grid, T, C)
No description has been provided for this image
In [ ]:
# quick diagnostic
print("lambda_grid first 5:", lambda_grid[:5])
print("lambda_grid last 5 :", lambda_grid[-5:])
print("lambda_grid descending?", np.all(np.diff(lambda_grid) < 0))
print("lambda_max (should be left):", lambda_grid[0], "lambda_min (right):", lambda_grid[-1])
lambda_grid first 5: [1.31820073 1.14487232 0.99433464 0.86359094 0.75003855]
lambda_grid last 5 : [0.00231675 0.00201213 0.00174755 0.00151777 0.0013182 ]
lambda_grid descending? True
lambda_max (should be left): 1.3182007278157886 lambda_min (right): 0.0013182007278157888
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso


# ======================================================================
# 1. EXTENDED DATA GENERATION
# ======================================================================
def generate_extended_data(n=200, p=200, n_true=6, n_corr=3, rho=0.8,
                           noise_std=0.5, seed=0):
    rng = np.random.default_rng(seed)

    # ---- Build covariance matrix ----
    Sigma = np.eye(p)

    # Put random diagonal entries (professor requirement)
    diag_entries = rng.uniform(0.5, 2.0, size=p)
    np.fill_diagonal(Sigma, diag_entries)

    true_idx = np.arange(n_true)
    corr_idx = np.arange(n_true, n_true + n_corr)

    for k, j in enumerate(corr_idx):
        t = true_idx[k % n_true]
        Sigma[j, t] = Sigma[t, j] = rho

    # Make positive semidefinite
    w, v = np.linalg.eigh(Sigma)
    Sigma = v @ np.diag(np.maximum(w, 1e-8)) @ v.T

    # ---- Generate predictors ----
    X = rng.multivariate_normal(np.zeros(p), Sigma, n)

    # Standardize columns ONCE
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # ---- True coefficients ----
    beta = np.zeros(p)
    beta[true_idx] = 1

    # ---- Generate response ----
    y = X @ beta + noise_std * rng.normal(size=n)

    return X, y, true_idx, corr_idx



# ======================================================================
# 2. LAMBDA GRID (PAPER CORRECT)
# ======================================================================
def make_lambda_grid(X, y, n_lambdas=50):
    # PAPER λ_max = max |Xᵀ y|
    lam_max = np.max(np.abs(X.T @ y))
    lam_min = lam_max * 0.01

    # Decreasing grid (1 → 0)
    lam_grid = np.linspace(lam_max, lam_min, n_lambdas)
    return lam_grid, lam_max



# ======================================================================
# 3. SINGLE STABILITY RUN
# ======================================================================
def stability_single_run(X, y, subsample_idx, lam,
                         randomized, alpha_weak, rng):
    Xs = X[subsample_idx]
    ys = y[subsample_idx]
    m = len(subsample_idx)

    if randomized:
        # UNIFORM weights (paper correct!)
        W = rng.uniform(alpha_weak, 1.0, X.shape[1])
        Xs = Xs / W

    alpha_sklearn = lam / (2 * m)

    model = Lasso(alpha=alpha_sklearn, fit_intercept=False,
                  max_iter=20000, tol=1e-6)
    model.fit(Xs, ys)

    return (model.coef_ != 0).astype(float)



# ======================================================================
# 4. FULL STABILITY SELECTION
# ======================================================================
def stability_path(X, y, lam_grid, B=200,
                   randomized=False, alpha_weak=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    m = n // 2
    L = len(lam_grid)

    sel = np.zeros((p, L))

    for b in range(B):
        subsample = rng.choice(n, m, replace=False)

        for j, lam in enumerate(lam_grid):
            sel[:, j] += stability_single_run(
                X, y, subsample, lam,
                randomized=randomized,
                alpha_weak=alpha_weak,
                rng=rng
            )

        if (b + 1) % 20 == 0:
            print(f"Completed {b+1}/{B}")

    return sel / B



# ======================================================================
# 5. PLOTTING
# ======================================================================
def plot_three_panels(sel1, sel05, sel02, lam_grid,
                      true_idx, corr_idx):

    lam_norm = lam_grid / lam_grid[0]   # normalized to [1 → 0]

    fig, axes = plt.subplots(1, 3, figsize=(18, 5))
    datasets = [(sel1, 1.0), (sel05, 0.5), (sel02, 0.2)]
    labels = ['(a) α = 1.0', '(b) α = 0.5', '(c) α = 0.2']

    for ax, (sel, alpha), lab in zip(axes, datasets, labels):

        # True vars (red)
        for j in true_idx:
            ax.plot(lam_norm, sel[j], c='darkred', lw=2)

        # Correlated false vars (blue dashed)
        for j in corr_idx:
            ax.plot(lam_norm, sel[j], c='navy', ls='--', lw=2)

        # Noise (faint gray)
        for k in range(sel.shape[0]):
            if k not in true_idx and k not in corr_idx:
                ax.plot(lam_norm, sel[k], c='gray', lw=0.2, alpha=0.2)

        ax.set_xlim(1, 0)
        ax.set_ylim(0, 1.05)
        ax.set_title(lab, fontsize=14)
        ax.set_xlabel("λ", fontsize=12)

    axes[0].set_ylabel("Π", fontsize=12)
    plt.tight_layout()
    plt.show()



# ======================================================================
# 6. MAIN SCRIPT
# ======================================================================
X, y, T, C = generate_extended_data(rho=0.95,
                                    n_true=6,
                                    n_corr=3)

lam_grid, lam_max = make_lambda_grid(X, y)

sel1  = stability_path(X, y, lam_grid, randomized=False, B=100, seed=1)
sel05 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.5, B=100, seed=2)
sel02 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.2, B=100, seed=3)

plot_three_panels(sel1, sel05, sel02, lam_grid, T, C)
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.581e-03, tolerance: 6.730e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.296e-04, tolerance: 5.988e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.597e-04, tolerance: 6.574e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.835e-04, tolerance: 6.574e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.807e-04, tolerance: 6.574e-04
  model = cd_fast.enet_coordinate_descent(
Completed 20/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.020e-03, tolerance: 7.518e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.574e-04, tolerance: 7.518e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.729e-04, tolerance: 6.610e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.987e-04, tolerance: 6.961e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.730e-04, tolerance: 6.961e-04
  model = cd_fast.enet_coordinate_descent(
Completed 40/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.806e-04, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.895e-04, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.652e-04, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.440e-04, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.026e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.111e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.199e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.290e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.384e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.481e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.582e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.669e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.742e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.816e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.890e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.057e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.556e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.685e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.949e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.475e-03, tolerance: 6.118e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.220e-03, tolerance: 5.162e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.038e-04, tolerance: 6.883e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.317e-04, tolerance: 6.883e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.210e-03, tolerance: 6.883e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.162e-04, tolerance: 6.883e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.627e-04, tolerance: 6.883e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.143e-04, tolerance: 6.883e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.240e-03, tolerance: 6.430e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.784e-03, tolerance: 6.430e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.382e-03, tolerance: 6.430e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.613e-04, tolerance: 5.868e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.113e-03, tolerance: 5.868e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.152e-03, tolerance: 5.868e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.243e-04, tolerance: 5.868e-04
  model = cd_fast.enet_coordinate_descent(
Completed 60/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.110e-03, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.004e-03, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.550e-04, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.367e-03, tolerance: 5.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.074e-03, tolerance: 5.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.146e-03, tolerance: 5.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.784e-03, tolerance: 5.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.333e-03, tolerance: 5.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.504e-03, tolerance: 5.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.391e-03, tolerance: 6.262e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.549e-03, tolerance: 6.262e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.327e-03, tolerance: 6.262e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.337e-04, tolerance: 6.262e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.080e-04, tolerance: 6.262e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.065e-03, tolerance: 6.145e-04
  model = cd_fast.enet_coordinate_descent(
Completed 80/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.277e-03, tolerance: 6.339e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.420e-03, tolerance: 6.443e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.253e-04, tolerance: 6.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.841e-04, tolerance: 6.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.028e-03, tolerance: 6.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.693e-04, tolerance: 6.807e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.379e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.512e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.648e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.789e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.921e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.035e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.032e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.819e-03, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.687e-04, tolerance: 8.135e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.606e-03, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.295e-03, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.593e-03, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.588e-03, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.173e-03, tolerance: 6.279e-04
  model = cd_fast.enet_coordinate_descent(
Completed 100/100
Completed 20/100
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
Completed 20/100
Completed 40/100
Completed 60/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.525e-03, tolerance: 7.178e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.773e-03, tolerance: 7.754e-04
  model = cd_fast.enet_coordinate_descent(
Completed 80/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.201e-03, tolerance: 6.131e-04
  model = cd_fast.enet_coordinate_descent(
Completed 100/100
No description has been provided for this image

Same but with var ranging from 0.5 to 10¶

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso


# ======================================================================
# 1. EXTENDED DATA GENERATION
# ======================================================================
def generate_extended_data(n=200, p=200, n_true=6, n_corr=3, rho=0.8,
                           noise_std=0.5, seed=0):
    rng = np.random.default_rng(seed)

    # ---- Build covariance matrix ----
    Sigma = np.eye(p)

    # Put random diagonal entries (professor requirement)
    diag_entries = rng.uniform(0.5, 10.0, size=p)
    np.fill_diagonal(Sigma, diag_entries)

    true_idx = np.arange(n_true)
    corr_idx = np.arange(n_true, n_true + n_corr)

    for k, j in enumerate(corr_idx):
        t = true_idx[k % n_true]
        Sigma[j, t] = Sigma[t, j] = rho

    # Make positive semidefinite
    w, v = np.linalg.eigh(Sigma)
    Sigma = v @ np.diag(np.maximum(w, 1e-8)) @ v.T

    # ---- Generate predictors ----
    X = rng.multivariate_normal(np.zeros(p), Sigma, n)

    # Standardize columns ONCE
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # ---- True coefficients ----
    beta = np.zeros(p)
    beta[true_idx] = 1

    # ---- Generate response ----
    y = X @ beta + noise_std * rng.normal(size=n)

    return X, y, true_idx, corr_idx



# ======================================================================
# 2. LAMBDA GRID (PAPER CORRECT)
# ======================================================================
def make_lambda_grid(X, y, n_lambdas=50):
    # PAPER λ_max = max |Xᵀ y|
    lam_max = np.max(np.abs(X.T @ y))
    lam_min = lam_max * 0.01

    # Decreasing grid (1 → 0)
    lam_grid = np.linspace(lam_max, lam_min, n_lambdas)
    return lam_grid, lam_max



# ======================================================================
# 3. SINGLE STABILITY RUN
# ======================================================================
def stability_single_run(X, y, subsample_idx, lam,
                         randomized, alpha_weak, rng):
    Xs = X[subsample_idx]
    ys = y[subsample_idx]
    m = len(subsample_idx)

    if randomized:
        # UNIFORM weights (paper correct!)
        W = rng.uniform(alpha_weak, 1.0, X.shape[1])
        Xs = Xs / W

    alpha_sklearn = lam / (2 * m)

    model = Lasso(alpha=alpha_sklearn, fit_intercept=False,
                  max_iter=20000, tol=1e-6)
    model.fit(Xs, ys)

    return (model.coef_ != 0).astype(float)



# ======================================================================
# 4. FULL STABILITY SELECTION
# ======================================================================
def stability_path(X, y, lam_grid, B=200,
                   randomized=False, alpha_weak=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    m = n // 2
    L = len(lam_grid)

    sel = np.zeros((p, L))

    for b in range(B):
        subsample = rng.choice(n, m, replace=False)

        for j, lam in enumerate(lam_grid):
            sel[:, j] += stability_single_run(
                X, y, subsample, lam,
                randomized=randomized,
                alpha_weak=alpha_weak,
                rng=rng
            )

        if (b + 1) % 20 == 0:
            print(f"Completed {b+1}/{B}")

    return sel / B



# ======================================================================
# 5. PLOTTING
# ======================================================================
def plot_three_panels(sel1, sel05, sel02, lam_grid,
                      true_idx, corr_idx):

    lam_norm = lam_grid / lam_grid[0]   # normalized to [1 → 0]

    fig, axes = plt.subplots(1, 3, figsize=(18, 5))
    datasets = [(sel1, 1.0), (sel05, 0.5), (sel02, 0.2)]
    labels = ['(a) α = 1.0', '(b) α = 0.5', '(c) α = 0.2']

    for ax, (sel, alpha), lab in zip(axes, datasets, labels):

        # True vars (red)
        for j in true_idx:
            ax.plot(lam_norm, sel[j], c='darkred', lw=2)

        # Correlated false vars (blue dashed)
        for j in corr_idx:
            ax.plot(lam_norm, sel[j], c='navy', ls='--', lw=2)

        # Noise (faint gray)
        for k in range(sel.shape[0]):
            if k not in true_idx and k not in corr_idx:
                ax.plot(lam_norm, sel[k], c='gray', lw=0.2, alpha=0.2)

        ax.set_xlim(1, 0)
        ax.set_ylim(0, 1.05)
        ax.set_title(lab, fontsize=14)
        ax.set_xlabel("λ", fontsize=12)

    axes[0].set_ylabel("Π", fontsize=12)
    plt.tight_layout()
    plt.show()



# ======================================================================
# 6. MAIN SCRIPT
# ======================================================================
X, y, T, C = generate_extended_data(rho=0.95,
                                    n_true=6,
                                    n_corr=3)

lam_grid, lam_max = make_lambda_grid(X, y)

sel1  = stability_path(X, y, lam_grid, randomized=False, B=100, seed=1)
sel05 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.5, B=100, seed=2)
sel02 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.2, B=100, seed=3)

plot_three_panels(sel1, sel05, sel02, lam_grid, T, C)
Completed 20/100
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
Completed 20/100
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
Completed 20/100
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
No description has been provided for this image

Explanation of why increasing the variance produces the graph is does above¶

When you increase the diagonal variances (heteroskedastic predictors), the correlations between variables become much weaker after standardization, because:

Standardization destroys the intended covariance structure

We generate $$Y\sim\mathcal{N}(0,\Sigma)$$

Then we standardize each column. This transforms $$Y_j^{(std)} = \frac{X}{\sqrt{\Sigma_{jj}}}$$

So the transformed covariance matrix becomes: $$\Sigma_{jk}^{std} = \frac{\Sigma_{jk}}{\sqrt{\Sigma_{jj}\Sigma_{kk}}}$$

This means the effective correlation between variables j and k is: $$\rho_{jk}^{std} = \frac{\Sigma_{jk}}{\sqrt{\Sigma_{jj}\Sigma_{kk}}}$$

So when we increase the range of the variance but keep $\rho$ the same, in our case we went from $(0.5,2.0) \rightarrow (0.5,10)$ which means we assigned diagonal entries $\Sigma_{jj} \sim Unif(0.5,2.0)$

Our $\rho$ is still the same $0.95$. Since we multiply by the variance in the denominator the higher the variance can be the smaller the correlation gets. Now the correlated variables are barely correlated with the true variables.

Same Var range (0.5-2), correlation is 0.95, 8 true 7 correlated. Really Good plot¶

In [2]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso


# ======================================================================
# 1. EXTENDED DATA GENERATION
# ======================================================================
def generate_extended_data(n=200, p=200, n_true=8, n_corr=7, rho=0.5,
                           noise_std=0.5, seed=0):
    rng = np.random.default_rng(seed)

    # ---- Build covariance matrix ----
    Sigma = np.eye(p)

    # Put random diagonal entries (professor requirement)
    diag_entries = rng.uniform(0.5, 2.0, size=p)
    np.fill_diagonal(Sigma, diag_entries)

    true_idx = np.arange(n_true)
    corr_idx = np.arange(n_true, n_true + n_corr)

    for k, j in enumerate(corr_idx):
        t = true_idx[k % n_true]
        Sigma[j, t] = Sigma[t, j] = rho

    # Make positive semidefinite
    w, v = np.linalg.eigh(Sigma)
    Sigma = v @ np.diag(np.maximum(w, 1e-8)) @ v.T

    # ---- Generate predictors ----
    X = rng.multivariate_normal(np.zeros(p), Sigma, n)

    # Standardize columns ONCE
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # ---- True coefficients ----
    beta = np.zeros(p)
    beta[true_idx] = 1

    # ---- Generate response ----
    y = X @ beta + noise_std * rng.normal(size=n)

    return X, y, true_idx, corr_idx, Sigma



# ======================================================================
# 2. LAMBDA GRID (PAPER CORRECT)
# ======================================================================
def make_lambda_grid(X, y, n_lambdas=50):
    # PAPER λ_max = max |Xᵀ y|
    lam_max = np.max(np.abs(X.T @ y))
    lam_min = lam_max * 0.01

    # Decreasing grid (1 → 0)
    lam_grid = np.linspace(lam_max, lam_min, n_lambdas)
    return lam_grid, lam_max



# ======================================================================
# 3. SINGLE STABILITY RUN
# ======================================================================
def stability_single_run(X, y, subsample_idx, lam,
                         randomized, alpha_weak, rng):
    Xs = X[subsample_idx]
    ys = y[subsample_idx]
    m = len(subsample_idx)

    if randomized:
        # UNIFORM weights (paper correct!)
        W = rng.uniform(alpha_weak, 1.0, X.shape[1])
        Xs = Xs / W

    alpha_sklearn = lam / (2 * m)

    model = Lasso(alpha=alpha_sklearn, fit_intercept=False,
                  max_iter=20000, tol=1e-6)
    model.fit(Xs, ys)

    return (model.coef_ != 0).astype(float)



# ======================================================================
# 4. FULL STABILITY SELECTION
# ======================================================================
def stability_path(X, y, lam_grid, B=200,
                   randomized=False, alpha_weak=1.0, seed=0):
    rng = np.random.default_rng(seed) 
    n, p = X.shape

    m = n // 2
    L = len(lam_grid)

    sel = np.zeros((p, L))

    for b in range(B):
        subsample = rng.choice(n, m, replace=False) # Draw m = n/2 indices without replacement uniformly at random

        for j, lam in enumerate(lam_grid):
            sel[:, j] += stability_single_run(
                X, y, subsample, lam,
                randomized=randomized,
                alpha_weak=alpha_weak,
                rng=rng
            )

        if (b + 1) % 20 == 0:
            print(f"Completed {b+1}/{B}")
    
    return sel / B



# ======================================================================
# 5. PLOTTING
# ======================================================================
def plot_three_panels(sel1, sel05, sel02, lam_grid,
                      true_idx, corr_idx):

    lam_norm = lam_grid / lam_grid[0]   # normalized to [1 → 0]

    fig, axes = plt.subplots(1, 3, figsize=(18, 5))
    datasets = [(sel1, 1.0), (sel05, 0.5), (sel02, 0.2)]
    labels = ['(a) α = 1.0', '(b) α = 0.5', '(c) α = 0.2']

    for ax, (sel, alpha), lab in zip(axes, datasets, labels):

        # True vars (red)
        for j in true_idx:
            ax.plot(lam_norm, sel[j], c='darkred', lw=2)

        # Correlated false vars (blue dashed)
        for j in corr_idx:
            ax.plot(lam_norm, sel[j], c='navy', ls='--', lw=2)

        # Noise (faint gray)
        for k in range(sel.shape[0]):
            if k not in true_idx and k not in corr_idx:
                ax.plot(lam_norm, sel[k], c='gray', lw=0.2, alpha=0.2)

        ax.set_xlim(1, 0)
        ax.set_ylim(0, 1.05)
        ax.set_title(lab, fontsize=14)
        ax.set_xlabel("λ", fontsize=12)

    axes[0].set_ylabel("Π", fontsize=12)
    plt.tight_layout()
    plt.show()



# ======================================================================
# 6. MAIN SCRIPT
# ======================================================================
X, y, T, C, Sigma = generate_extended_data(rho=0.95,
                                    n_true=8,
                                    n_corr=7)

lam_grid, lam_max = make_lambda_grid(X, y)

sel1  = stability_path(X, y, lam_grid, randomized=False, B=100, seed=1)
sel05 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.5, B=100, seed=2)
sel02 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.2, B=100, seed=3)

plot_three_panels(sel1, sel05, sel02, lam_grid, T, C)
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.978e-04, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.036e-04, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.016e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.134e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.250e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.349e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.430e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.544e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.544e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.385e-03, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.567e-04, tolerance: 7.687e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.998e-03, tolerance: 8.496e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.949e-03, tolerance: 8.496e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.087e-03, tolerance: 8.496e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.130e-03, tolerance: 8.496e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.668e-03, tolerance: 7.828e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.769e-03, tolerance: 7.828e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.607e-03, tolerance: 7.828e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.083e-03, tolerance: 6.726e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.393e-03, tolerance: 6.726e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.641e-03, tolerance: 6.726e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.670e-04, tolerance: 6.726e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.681e-03, tolerance: 6.726e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.800e-03, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.552e-03, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.264e-03, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.936e-03, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.569e-03, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.164e-03, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.835e-04, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.844e-04, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.437e-04, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.543e-04, tolerance: 7.731e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.489e-03, tolerance: 7.781e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.636e-03, tolerance: 7.781e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.768e-03, tolerance: 7.781e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.879e-03, tolerance: 7.781e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.941e-03, tolerance: 7.781e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.936e-03, tolerance: 7.781e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.485e-03, tolerance: 7.781e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.672e-03, tolerance: 6.683e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.869e-04, tolerance: 6.683e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.931e-04, tolerance: 6.683e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.565e-04, tolerance: 6.683e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.349e-03, tolerance: 8.563e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.195e-03, tolerance: 8.563e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.080e-03, tolerance: 9.701e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.859e-03, tolerance: 9.701e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.613e-03, tolerance: 9.701e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.528e-03, tolerance: 9.701e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.094e-03, tolerance: 9.701e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.076e-03, tolerance: 9.701e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.566e-03, tolerance: 9.701e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.546e-03, tolerance: 8.435e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.050e-03, tolerance: 8.435e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.480e-03, tolerance: 8.435e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.830e-03, tolerance: 8.435e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.183e-03, tolerance: 8.435e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.106e-03, tolerance: 8.728e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.427e-03, tolerance: 8.069e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.923e-03, tolerance: 8.069e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.344e-03, tolerance: 8.069e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.711e-03, tolerance: 8.069e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.044e-03, tolerance: 8.069e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.786e-04, tolerance: 8.069e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.228e-03, tolerance: 7.416e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.056e-03, tolerance: 7.416e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.882e-03, tolerance: 7.416e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.989e-03, tolerance: 7.416e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.163e-03, tolerance: 7.416e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.418e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.559e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.594e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.509e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.303e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.379e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.554e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.235e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.192e-03, tolerance: 8.283e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.566e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.516e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.687e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.955e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.389e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.456e-03, tolerance: 8.462e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.362e-03, tolerance: 8.462e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.358e-03, tolerance: 8.462e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.065e-03, tolerance: 8.462e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.121e-03, tolerance: 8.462e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.180e-03, tolerance: 8.462e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.298e-03, tolerance: 8.741e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.967e-04, tolerance: 8.741e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.423e-03, tolerance: 8.741e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.956e-03, tolerance: 7.997e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.147e-03, tolerance: 7.997e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.101e-03, tolerance: 7.997e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.872e-03, tolerance: 7.997e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.546e-03, tolerance: 7.997e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.044e-03, tolerance: 7.997e-04
  model = cd_fast.enet_coordinate_descent(
Completed 20/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.516e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.494e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.446e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.372e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.272e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.146e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.979e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.777e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.728e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.385e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.812e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.597e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.318e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.577e-03, tolerance: 8.210e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.933e-03, tolerance: 9.987e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.361e-03, tolerance: 9.987e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.253e-03, tolerance: 9.987e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.708e-03, tolerance: 9.987e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.253e-03, tolerance: 9.987e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.590e-03, tolerance: 9.987e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.507e-03, tolerance: 9.601e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.398e-03, tolerance: 9.601e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.341e-03, tolerance: 8.489e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.411e-03, tolerance: 8.489e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.364e-03, tolerance: 8.489e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.182e-03, tolerance: 8.489e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.201e-03, tolerance: 8.230e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.419e-03, tolerance: 8.230e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.696e-04, tolerance: 8.230e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.411e-04, tolerance: 8.332e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.277e-04, tolerance: 8.332e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.623e-04, tolerance: 8.332e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.560e-03, tolerance: 8.145e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.237e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.425e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.589e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.762e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.944e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.109e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.220e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.186e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.990e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.352e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.209e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.905e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.544e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.123e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.644e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.105e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.221e-04, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.086e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.108e-03, tolerance: 8.431e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.245e-03, tolerance: 8.414e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.096e-03, tolerance: 8.414e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.918e-03, tolerance: 7.241e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.291e-03, tolerance: 7.241e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.458e-03, tolerance: 7.241e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.295e-03, tolerance: 7.241e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.506e-03, tolerance: 1.005e-03
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.685e-03, tolerance: 1.005e-03
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.238e-03, tolerance: 7.031e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.881e-03, tolerance: 7.031e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.923e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.237e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.354e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.387e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.845e-04, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.344e-03, tolerance: 7.594e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.484e-03, tolerance: 7.594e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.746e-03, tolerance: 7.594e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.916e-03, tolerance: 7.594e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.786e-03, tolerance: 7.594e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.273e-03, tolerance: 7.594e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.887e-03, tolerance: 6.578e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.695e-03, tolerance: 6.578e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.062e-03, tolerance: 6.578e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.287e-03, tolerance: 6.578e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.874e-03, tolerance: 6.578e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.044e-03, tolerance: 8.860e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.018e-03, tolerance: 7.791e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.020e-03, tolerance: 7.791e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.901e-03, tolerance: 7.791e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.866e-03, tolerance: 7.791e-04
  model = cd_fast.enet_coordinate_descent(
Completed 40/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.232e-03, tolerance: 8.815e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.757e-03, tolerance: 8.815e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.260e-03, tolerance: 8.815e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.849e-03, tolerance: 7.489e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.887e-03, tolerance: 7.489e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.620e-03, tolerance: 8.024e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.807e-03, tolerance: 8.024e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.030e-03, tolerance: 8.024e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.572e-04, tolerance: 8.024e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.165e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.699e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.175e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.035e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.252e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.502e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.778e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.016e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.172e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.015e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.022e-03, tolerance: 8.779e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.593e-03, tolerance: 8.367e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.510e-04, tolerance: 8.367e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.343e-04, tolerance: 8.367e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.134e-03, tolerance: 8.367e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.011e-03, tolerance: 9.220e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.741e-03, tolerance: 9.220e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.731e-03, tolerance: 9.220e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.205e-03, tolerance: 9.220e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.045e-03, tolerance: 8.391e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.708e-03, tolerance: 8.391e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.356e-03, tolerance: 8.391e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.060e-03, tolerance: 8.391e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.991e-04, tolerance: 8.391e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.016e-03, tolerance: 8.391e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.132e-03, tolerance: 8.391e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.433e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.957e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.381e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.698e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.925e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.960e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.860e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.520e-03, tolerance: 7.351e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.584e-03, tolerance: 8.425e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.172e-03, tolerance: 8.425e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.922e-04, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.102e-04, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.026e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.140e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.251e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.356e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.111e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.163e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.222e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.267e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.225e-03, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.121e-04, tolerance: 6.984e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.102e-03, tolerance: 8.468e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.237e-03, tolerance: 8.468e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.348e-03, tolerance: 8.468e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.382e-03, tolerance: 8.468e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.207e-03, tolerance: 8.468e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.307e-02, tolerance: 8.019e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.109e-02, tolerance: 8.019e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.567e-03, tolerance: 8.019e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.641e-04, tolerance: 7.087e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.381e-04, tolerance: 7.087e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.152e-04, tolerance: 7.087e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.773e-04, tolerance: 7.087e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.142e-04, tolerance: 7.087e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.143e-03, tolerance: 7.454e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.615e-03, tolerance: 7.454e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.399e-03, tolerance: 7.454e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.040e-04, tolerance: 7.454e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.979e-03, tolerance: 8.870e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.235e-03, tolerance: 8.870e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.409e-03, tolerance: 8.870e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.749e-03, tolerance: 8.342e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.521e-03, tolerance: 8.342e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.943e-03, tolerance: 6.790e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.149e-03, tolerance: 6.790e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.791e-04, tolerance: 6.790e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.600e-03, tolerance: 7.897e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.632e-03, tolerance: 7.897e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.964e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.834e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.678e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.494e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.283e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.044e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.782e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.516e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.225e-03, tolerance: 9.231e-04
  model = cd_fast.enet_coordinate_descent(
Completed 60/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.192e-03, tolerance: 7.981e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.531e-03, tolerance: 9.326e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.339e-03, tolerance: 9.326e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.196e-03, tolerance: 9.326e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.057e-03, tolerance: 8.886e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.545e-03, tolerance: 8.886e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.425e-03, tolerance: 8.886e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.433e-03, tolerance: 8.649e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.843e-03, tolerance: 8.649e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.168e-03, tolerance: 8.649e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.422e-03, tolerance: 7.991e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.038e-03, tolerance: 7.991e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.488e-03, tolerance: 7.991e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.225e-03, tolerance: 7.991e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.839e-03, tolerance: 7.991e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.407e-03, tolerance: 7.991e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.691e-03, tolerance: 8.741e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.451e-03, tolerance: 8.741e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.110e-03, tolerance: 8.741e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.391e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.425e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.237e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.087e-03, tolerance: 8.658e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.406e-03, tolerance: 7.822e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.938e-03, tolerance: 7.822e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.398e-03, tolerance: 7.822e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.956e-03, tolerance: 7.822e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.641e-03, tolerance: 7.822e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.357e-03, tolerance: 7.822e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.728e-03, tolerance: 7.822e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.589e-04, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.405e-04, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.026e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.117e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.212e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.311e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.395e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.485e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.539e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.320e-03, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.911e-04, tolerance: 8.358e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.452e-03, tolerance: 8.504e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.430e-03, tolerance: 8.504e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.390e-03, tolerance: 8.504e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.335e-03, tolerance: 8.504e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.271e-03, tolerance: 8.504e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.092e-03, tolerance: 8.504e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.581e-04, tolerance: 8.504e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.969e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.709e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.394e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.001e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.555e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.055e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.471e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.749e-03, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.673e-04, tolerance: 8.383e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.528e-03, tolerance: 8.811e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.375e-03, tolerance: 8.811e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.177e-03, tolerance: 8.811e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.573e-04, tolerance: 8.811e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.694e-03, tolerance: 9.052e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.703e-03, tolerance: 9.052e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.505e-03, tolerance: 9.052e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.322e-03, tolerance: 7.800e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.936e-03, tolerance: 7.800e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.520e-03, tolerance: 7.800e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.106e-03, tolerance: 7.800e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.235e-03, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.264e-03, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.257e-03, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.232e-03, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.187e-03, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.130e-03, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.060e-03, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.779e-04, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.836e-04, tolerance: 8.029e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.850e-03, tolerance: 6.584e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.278e-03, tolerance: 6.584e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.949e-03, tolerance: 6.584e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.140e-03, tolerance: 6.584e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.481e-03, tolerance: 6.584e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.474e-03, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.138e-03, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.901e-04, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.801e-04, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.021e-03, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.006e-03, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.038e-03, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.009e-03, tolerance: 7.740e-04
  model = cd_fast.enet_coordinate_descent(
Completed 80/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.380e-03, tolerance: 6.864e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.534e-03, tolerance: 6.864e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.313e-03, tolerance: 6.864e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.604e-03, tolerance: 6.864e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.865e-03, tolerance: 9.428e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.161e-03, tolerance: 9.428e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.514e-03, tolerance: 9.428e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.739e-03, tolerance: 9.428e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.392e-03, tolerance: 9.734e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.253e-03, tolerance: 9.734e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.172e-03, tolerance: 8.753e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.129e-03, tolerance: 8.753e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.218e-03, tolerance: 8.753e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.050e-03, tolerance: 8.948e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.135e-03, tolerance: 8.948e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.208e-03, tolerance: 8.948e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.253e-03, tolerance: 8.948e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.277e-03, tolerance: 8.948e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.236e-03, tolerance: 8.948e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.068e-03, tolerance: 8.948e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.519e-03, tolerance: 7.540e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.986e-03, tolerance: 7.540e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.374e-03, tolerance: 7.540e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.691e-03, tolerance: 7.540e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.873e-03, tolerance: 7.540e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.490e-04, tolerance: 7.540e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.694e-03, tolerance: 9.318e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.127e-03, tolerance: 9.318e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.413e-03, tolerance: 9.318e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.760e-03, tolerance: 9.318e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.646e-03, tolerance: 9.318e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.236e-03, tolerance: 9.318e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.995e-04, tolerance: 7.663e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.176e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.020e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.821e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.580e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.344e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.134e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.752e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.061e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.709e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.809e-03, tolerance: 8.477e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.431e-03, tolerance: 9.517e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.379e-03, tolerance: 9.517e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.671e-03, tolerance: 9.517e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.495e-03, tolerance: 8.264e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.419e-03, tolerance: 9.119e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 6.578e-03, tolerance: 9.119e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.462e-04, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.462e-04, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.042e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.075e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.109e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.168e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.086e-03, tolerance: 8.301e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.811e-03, tolerance: 8.170e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.337e-03, tolerance: 8.170e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.798e-03, tolerance: 8.170e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.194e-03, tolerance: 8.170e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.656e-04, tolerance: 8.170e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.522e-03, tolerance: 8.961e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.551e-03, tolerance: 8.961e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.488e-03, tolerance: 8.961e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.935e-03, tolerance: 7.411e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.673e-03, tolerance: 7.411e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.374e-03, tolerance: 7.411e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.039e-03, tolerance: 7.411e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.710e-04, tolerance: 7.411e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 7.697e-04, tolerance: 7.411e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 5.270e-03, tolerance: 9.004e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 3.826e-03, tolerance: 9.004e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.310e-03, tolerance: 9.004e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.134e-03, tolerance: 9.004e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.586e-03, tolerance: 7.451e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.496e-03, tolerance: 7.451e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.378e-03, tolerance: 7.451e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.231e-03, tolerance: 7.451e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.047e-03, tolerance: 7.451e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 8.393e-04, tolerance: 7.451e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.406e-03, tolerance: 7.919e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.247e-03, tolerance: 7.919e-04
  model = cd_fast.enet_coordinate_descent(
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 9.387e-04, tolerance: 7.919e-04
  model = cd_fast.enet_coordinate_descent(
Completed 100/100
Completed 20/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.095e-03, tolerance: 8.025e-04
  model = cd_fast.enet_coordinate_descent(
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
Completed 20/100
Completed 40/100
Completed 60/100
/home/exouser/StabilitySection/.venv/lib/python3.10/site-packages/sklearn/linear_model/_coordinate_descent.py:695: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 4.477e-03, tolerance: 8.866e-04
  model = cd_fast.enet_coordinate_descent(
Completed 80/100
Completed 100/100
No description has been provided for this image
In [5]:
print("Covariance matrix Sigma (first 15x15 block):")
def print_clean_matrix(Sigma, decimals=3, threshold=1e-6):
    Sigma_clean = np.round(Sigma, decimals)
    Sigma_clean[np.abs(Sigma_clean) < threshold] = 0.0
    print(Sigma_clean)
print_clean_matrix(Sigma[:15, :15])
Covariance matrix Sigma (first 15x15 block):
[[1.455 0.    0.    0.    0.    0.    0.    0.    0.95  0.    0.    0.
  0.    0.    0.   ]
 [0.    0.905 0.    0.    0.    0.    0.    0.    0.    0.95  0.    0.
  0.    0.    0.   ]
 [0.    0.    0.561 0.    0.    0.    0.    0.    0.    0.    0.95  0.
  0.    0.    0.   ]
 [0.    0.    0.    0.74  0.    0.    0.    0.    0.    0.    0.    0.732
  0.    0.    0.   ]
 [0.    0.    0.    0.    1.72  0.    0.    0.    0.    0.    0.    0.
  0.95  0.    0.   ]
 [0.    0.    0.    0.    0.    1.869 0.    0.    0.    0.    0.    0.
  0.    0.95  0.   ]
 [0.    0.    0.    0.    0.    0.    1.41  0.    0.    0.    0.    0.
  0.    0.    0.95 ]
 [0.    0.    0.    0.    0.    0.    0.    1.594 0.    0.    0.    0.
  0.    0.    0.   ]
 [0.95  0.    0.    0.    0.    0.    0.    0.    1.315 0.    0.    0.
  0.    0.    0.   ]
 [0.    0.95  0.    0.    0.    0.    0.    0.    0.    1.903 0.    0.
  0.    0.    0.   ]
 [0.    0.    0.95  0.    0.    0.    0.    0.    0.    0.    1.724 0.
  0.    0.    0.   ]
 [0.    0.    0.    0.732 0.    0.    0.    0.    0.    0.    0.    0.724
  0.    0.    0.   ]
 [0.    0.    0.    0.    0.95  0.    0.    0.    0.    0.    0.    0.
  1.786 0.    0.   ]
 [0.    0.    0.    0.    0.    0.95  0.    0.    0.    0.    0.    0.
  0.    0.55  0.   ]
 [0.    0.    0.    0.    0.    0.    0.95  0.    0.    0.    0.    0.
  0.    0.    1.594]]

Yes — I remember exactly what your professor suggested for your project extension. Here is the full list, reconstructed cleanly:


✅ Your professor’s suggested extension (verbatim meaning)¶

1️⃣ Use a different covariance matrix (Σ)¶

Not just the identity matrix. Specifically:

  • More complex diagonal entries (heterogeneous variances instead of all 1’s)

  • Correlations among variables not only between true vs. noise, but also:

2️⃣ Make more than 2 variables truly important¶

He specifically said:

  • Increase the number of true signal variables
  • Something like 6 or 8 true predictors

So that stability selection is tested in a more realistic “multi-signal” setting.


3️⃣ Add 1–2 more correlated variables¶

He wanted you to have:

  • a group of correlated irrelevant variables,
  • e.g., 3–4 variables correlated with one or more true variables.

This is to test whether stability selection prevents false positives in strongly correlated designs.


4️⃣ Use a high-correlation value (e.g. ρ > 0.5)¶

He said something like:

“Try ρ = 0.8 or something large.”

This is to explicitly violate the irrepresentable condition.


5️⃣ Prove that the irrepresentable condition (12) does NOT hold¶

He wanted you to show:

  • For your chosen Σ, the irrepresentable condition fails when ρ > 0.5
  • Identify exactly which values of ρ break it
  • If it’s too messy to prove for finite n, consider:

6️⃣ Show the limit as n → ∞¶

So that you can derive:

  • When Lasso fails variable selection consistency
  • Why stability selection might work better in that regime

🔥 The professor’s exact conceptual goal:¶

He wanted you to create a simulation where classical LASSO fails (because the irrepresentable condition is violated), and then demonstrate whether randomized LASSO + stability selection improves selection.

This is a direct extension of Figure 4 of Meinshausen–Bühlmann.


If you'd like, I can now help you with:¶

✔ the exact Σ matrix that matches your professor's request¶

(including different variances and controlled correlation blocks)

✔ explicit symbolic verification of the irrepresentable condition¶

(using your Σ and subset S)

✔ a clean proof that ρ > 0.5 violates condition (12)¶

✔ a rewritten simulation design section¶

that clearly connects to the professor’s suggestion

✔ tuning α so your plots look closer to the original Figure 4¶

Just tell me which part you want next.

Randomly assinging correlation, rho, is not constant for all the of correlated variables. Rho is uniformaly random from 0.5 to 0.95 STARRRRRR¶

In [15]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso


# ======================================================================
# 1. EXTENDED DATA GENERATION
# ======================================================================
def generate_extended_data(n=200, p=200, n_true=8, n_corr=7,
                           rho_range=(0.5, 0.95),
                           noise_std=0.5, seed=0):
    rng = np.random.default_rng(seed)

    # ---- Build covariance matrix ----
    Sigma = np.eye(p)

    # Random diagonal entries (heteroskedastic variances)
    diag_entries = rng.uniform(0.5, 2.0, size=p)
    np.fill_diagonal(Sigma, diag_entries)

    # True variables and correlated irrelevant variables
    true_idx = np.arange(n_true)
    corr_idx = np.arange(n_true, n_true + n_corr)

    # Each correlated variable gets its own rho_j ~ Uniform(rho_range)
    rho_values = rng.uniform(rho_range[0], rho_range[1], size=n_corr)

    for k, j in enumerate(corr_idx):
        t = true_idx[k % n_true]
        rho_j = rho_values[k]
        Sigma[j, t] = Sigma[t, j] = rho_j

    # Make Sigma PSD
    w, v = np.linalg.eigh(Sigma)
    Sigma = v @ np.diag(np.maximum(w, 1e-8)) @ v.T

    # ---- Generate predictors ----
    X = rng.multivariate_normal(np.zeros(p), Sigma, n)

    # Standardize columns
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # ---- True coefficients ----
    beta = np.zeros(p)
    beta[true_idx] = 1

    # ---- Generate response ----
    y = X @ beta + noise_std * rng.normal(size=n)

    return X, y, true_idx, corr_idx, Sigma



# ======================================================================
# 2. LAMBDA GRID (PAPER CORRECT)
# ======================================================================
def make_lambda_grid(X, y, n_lambdas=50):
    # PAPER λ_max = max |Xᵀ y|
    lam_max = np.max(np.abs(X.T @ y))
    lam_min = lam_max * 0.01

    # Decreasing grid (1 → 0)
    lam_grid = np.linspace(lam_max, lam_min, n_lambdas)
    return lam_grid, lam_max



# ======================================================================
# 3. SINGLE STABILITY RUN
# ======================================================================
def stability_single_run(X, y, subsample_idx, lam,
                         randomized, alpha_weak, rng):
    Xs = X[subsample_idx]
    ys = y[subsample_idx]
    m = len(subsample_idx)

    if randomized:
        # UNIFORM weights (paper correct!)
        W = rng.uniform(alpha_weak, 1.0, X.shape[1])
        Xs = Xs / W

    alpha_sklearn = lam / (2 * m)

    model = Lasso(alpha=alpha_sklearn, fit_intercept=False,
                  max_iter=20000, tol=1e-6)
    model.fit(Xs, ys)

    return (model.coef_ != 0).astype(float)



# ======================================================================
# 4. FULL STABILITY SELECTION
# ======================================================================
def stability_path(X, y, lam_grid, B=200,
                   randomized=False, alpha_weak=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    m = n // 2
    L = len(lam_grid)

    sel = np.zeros((p, L))

    for b in range(B):
        subsample = rng.choice(n, m, replace=False)

        for j, lam in enumerate(lam_grid):
            sel[:, j] += stability_single_run(
                X, y, subsample, lam,
                randomized=randomized,
                alpha_weak=alpha_weak,
                rng=rng
            )

        if (b + 1) % 20 == 0:
            print(f"Completed {b+1}/{B}")

    return sel / B



# ======================================================================
# 5. PLOTTING
# ======================================================================
def plot_three_panels(sel1, sel05, sel02, lam_grid,
                      true_idx, corr_idx):

    lam_norm = lam_grid / lam_grid[0]   # normalized to [1 → 0]

    fig, axes = plt.subplots(1, 3, figsize=(18, 5))
    datasets = [(sel1, 1.0), (sel05, 0.5), (sel02, 0.2)]
    labels = ['(a) α = 1.0', '(b) α = 0.5', '(c) α = 0.2']

    for ax, (sel, alpha), lab in zip(axes, datasets, labels):

        # True vars (red)
        for j in true_idx:
            ax.plot(lam_norm, sel[j], c='darkred', lw=2)

        # Correlated false vars (blue dashed)
        for j in corr_idx:
            ax.plot(lam_norm, sel[j], c='navy', ls='--', lw=2)

        # Noise (faint gray)
        for k in range(sel.shape[0]):
            if k not in true_idx and k not in corr_idx:
                ax.plot(lam_norm, sel[k], c='gray', lw=0.2, alpha=0.2)

        ax.set_xlim(1, 0)
        ax.set_ylim(0, 1.05)
        ax.set_title(lab, fontsize=14)
        ax.set_xlabel("λ", fontsize=12)

    axes[0].set_ylabel("Π", fontsize=12)
    plt.tight_layout()
    plt.show()



# ======================================================================
# 6. MAIN SCRIPT
# ======================================================================
X, y, T, C, Sigma = generate_extended_data(rho_range=(0.5, 0.95),
                                    n_true=8,
                                    n_corr=7,
                                    seed=0)

lam_grid, lam_max = make_lambda_grid(X, y)

sel1  = stability_path(X, y, lam_grid, randomized=False, B=100, seed=1)
sel05 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.5, B=100, seed=2)
sel02 = stability_path(X, y, lam_grid, randomized=True, alpha_weak=0.2, B=100, seed=3)

plot_three_panels(sel1, sel05, sel02, lam_grid, T, C)
Completed 20/100
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
Completed 20/100
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
Completed 20/100
Completed 40/100
Completed 60/100
Completed 80/100
Completed 100/100
No description has been provided for this image

Why is the true variable getting dropped under randomized lasso?¶

Randomized lasso applies random weights $W_k \sim \text{Uniform}(\alpha, 1)$ to each predictor. This makes the penalization different in each subsample: $$\hat{\beta}^{\lambda, W} = \argmin \left(||Y-X\beta||_2^2 + \lambda \sum_{k=1}^{p}\frac{|\beta_k|}{W_k} \right)$$

So if $W_j$ happens to be small on many subsamples, the effective penalty becomes big, so the variable is more likely to shrunk to zero. This can temporarily suppress even true variables. Note, that we only did a subsample of 100 bootstrap.

Is this bad?¶

Not necessarily. In fact, it is often the intended effect. Randomized lasso + stability selection trades: Higher precision (low false positives) for Slightly lower recall (can lose weak true variables). This is exactly the stability-selection principle:

Select variables that survive across randomized perturbations, not variables that survive every lasso fit, because variables that are not stable are untrustworthy.

In this situation, some true variables have weaker marginal signal (due to noise, random covariance). Under random reweighting, a weaker true coefficient may fall below the selection threshold in some subsamples. So its stability probability is lower. This is consistent with the theory. This is normal behavior in noisy, high-dimensional settings.

Short answer:

✅ No — you do not need to include the full proof of Theorem 1 in your report.¶

And in fact:

⭐ You SHOULD NOT include it unless your professor specifically asked you to.¶

Here’s why:


🔥 1. The proof of Theorem 1 in the paper is advanced research-level probability¶

It involves:

  • U-statistics
  • exchangeability arguments
  • combinatorial identities
  • pairwise selection indicators
  • delicate inequalities

It is not the kind of proof expected in a project where you reproduce numerical experiments and extend simulations.

Professors never expect students to reproduce or fully understand this proof unless it’s a theoretical statistics PhD qualifying exam.


🔥 2. Your professor’s project instructions say:¶

Explain the core ideas in your own words Extend numerical studies Reproduce a figure Run simulations Not reproduce 15 pages of probability theory

In fact, the rubric talks about:

  • clarity
  • understanding
  • simulation
  • code
  • presentation

Nowhere does it say:

“Reproduce all mathematical proofs from the paper.”


🔥 3. What is expected?¶

A conceptual explanation of why subsampling works:

Like this:

Stability selection repeatedly fits the model on random half-samples. A variable that is truly important will be selected consistently across subsamples. A noise variable will appear inconsistently. The probability of false selection is controlled because a noise variable rarely appears in two independent subsamples simultaneously. This allows Meinshausen & Bühlmann to derive a finite-sample bound for the expected number of false positives using pairwise subsample identities.

This is all that’s necessary.


🔥 4. The only theoretical content you need is:¶

I can format it into one clean paragraph for your report:


⭐ Final “Theory” section you should include (simple, clean, correct)¶

You may use this:


Why subsampling (m = n/2) is used in stability selection¶

Stability selection repeatedly fits a variable-selection method (e.g., lasso) on random subsamples of size (m = \lfloor n/2 \rfloor). Using half-samples ensures that two independently drawn subsamples contain sufficient overlap to compare variable selections, yet differ enough to reveal instability in noise variables. The key theoretical object is the pairwise selection probability, the chance that a variable is selected in two independent subsamples. True variables tend to be selected consistently across subsamples, whereas noise variables rarely appear in both. This distinction enables finite-sample control over false positives.


Theorem 1 (informal meaning)¶

Meinshausen and Bühlmann (2010) show that, under mild assumptions (mainly exchangeability of noise variables), stability selection guarantees an upper bound on the expected number of false positives:

[ \mathbb{E}[V] ;\le; \frac{q_\Lambda^2}{(2\pi_{\mathrm{thr}} - 1)p}. ]

Here:

  • (q_\Lambda) is the average number of variables selected over subsamples,
  • (p) is the total number of variables,
  • (\pi_{\mathrm{thr}}) is the selection probability threshold (typically 0.6–0.9).

This shows that choosing a sufficiently high (\pi_{\mathrm{thr}}) and keeping the average model size small ensures very few false positives.


Why the proof requires half-sampling¶

The proof uses two independently drawn half-samples. For this choice of (m = n/2):

  • pairwise selection indicators simplify,
  • the subsamples behave like a second-order U-statistic,
  • the expected overlap between subsamples is large enough to derive clean probability bounds,
  • the algebraic identity underlying Theorem 1 (Lemma 1 in the paper) holds with the correct constants.

This is why the finite-sample guarantee is stated specifically for (m = n/2).


This is all you need.

Nothing more.


⭐ So what to actually put in your project?¶

In your “Theory” section:

  • Include the statement of Theorem 1 (clean version).
  • Include the intuitive meaning.
  • Include why half-sampling is used.
  • DO NOT include the full proof.

Your project is applied, not theoretical.


If you want, I can format this into:¶

  • a LaTeX section ready to paste
  • a shorter slide version
  • or a “theory section” that directly fits your report structure

Just tell me which one you prefer.

The irrepresentable condition (12) in the paper is the finite-sample condition

$$ \eta_k = \big|\operatorname{sgn}(\beta_S)^\top (X_S^\top X_S)^{-1} X_S^\top X_k \big|. $$

For our simulation, the active set is $S=\{1,2\}$ and $\operatorname{sgn}(\beta_S)=(1,1)^\top$. The irrepresentable condition is satisfied if and only if

$$ \max_{k\in N} \eta_k < 1. $$

We evaluate this quantity directly from the simulated design matrix $X$, without using the population covariance $\Sigma$.